killer.sh Mock Exam

Solutions for the July 2022 mock exam questions

There is more than one way to solve each task; below are the answers provided by the official site, and any approach that produces the correct output earns the points. Questions 1-25 of mock exam one are covered here; the extra questions are answered in a separate post if you need them. The real exam has 17-20 questions and no extra questions.

Pre Setup

This step sets up a few shortcuts, for example for generating YAML files and for the tab width. Pay attention to the environment: after switching to another node these environment variables are gone, so set them mainly on the base node and on the master nodes of the other clusters. Above all, copy the do and now exports into the notepad inside the PSI Ubuntu environment so you can paste them directly when needed.

Once you’ve gained access to your terminal it might be wise to spend ~1 minute to setup your environment. You could set these:
alias k=kubectl # will already be pre-configured
export do="--dry-run=client -o yaml"    # k get pod x $do
export now="--force --grace-period 0"   # k delete pod x $now
Vim
To make vim use 2 spaces for a tab edit ~/.vimrc to contain:
set tabstop=2
set expandtab
set shiftwidth=2

Question 1 | Contexts

Task weight: 1%
This question is about contexts. The main thing to watch out for is not to write the header line into the answer file.

You have access to multiple clusters from your main terminal through kubectl contexts. Write all those context names into /opt/course/1/contexts.
Next write a command to display the current context into /opt/course/1/context_default_kubectl.sh, the command should use kubectl.
Finally write a second command doing the same thing into /opt/course/1/context_default_no_kubectl.sh, but without the use of kubectl.

Answer:

Maybe the fastest way is just to run:
k config get-contexts # copy manually
k config get-contexts -o name > /opt/course/1/contexts   # -o name drops the header line
Or using jsonpath:
k config view -o yaml # overview
k config view -o jsonpath="{.contexts[*].name}"
k config view -o jsonpath="{.contexts[*].name}" | tr " " "\n"                            # new lines
k config view -o jsonpath="{.contexts[*].name}" | tr " " "\n" > /opt/course/1/contexts

Next create the first command:
/opt/course/1/context_default_kubectl.sh
kubectl config current-context
And the second one:
/opt/course/1/context_default_no_kubectl.sh
cat ~/.kube/config | grep current | sed -e "s/current-context: //"
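An alternative without kubectl (just a sketch, assuming the kubeconfig sits at its default location ~/.kube/config) would be to let awk extract the value:
awk '/current-context/{print $2}' ~/.kube/config
Any of these variants is fine for /opt/course/1/context_default_no_kubectl.sh as long as it prints only the context name.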

Question 2 | Schedule Pod on Master Node

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container. This Pod should only be scheduled on a master node, do not add new labels to any nodes.
Shortly write the reason on why Pods are by default not scheduled on master nodes into /opt/course/2/master_schedule_reason .

Answer:

First we find the master node(s) and their taints:
k get node # find master node
k describe node cluster1-master1 | grep Taint # get master node taints
k describe node cluster1-master1 | grep Labels -A 10 # get master node labels
k get node cluster1-master1 --show-labels # OR: get master node labels

Next we create the Pod template:
check the export on the very top of this document so we can use $do
k run pod1 --image=httpd:2.4.41-alpine $do > 2.yaml
vim 2.yaml
Perform the necessary changes manually. Use the Kubernetes docs and search for example for tolerations and nodeSelector to find examples:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container                  # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations:                            # add
  - effect: NoSchedule                    # add
    key: node-role.kubernetes.io/master   # add
  nodeSelector:                           # add
    node-role.kubernetes.io/master: ""    # add
status: {}

Important here to add the toleration for running on master nodes, but also the nodeSelector to make sure it only runs on master nodes. If we only specify a toleration the Pod can be scheduled on master or worker nodes.
Now we create it:
k -f 2.yaml create
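To verify the result (assuming the node name found above), a quick check of the placement could be:
k get pod pod1 -o wide        # NODE column should show cluster1-master1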
Finally the short reason why Pods are not scheduled on master nodes by default:
/opt/course/2/master_schedule_reason
master nodes usually have a taint defined

Question 3 | Scale down StatefulSet

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are two Pods named o3db-* in Namespace project-c13. C13 management asked you to scale the Pods down to one replica to save resources. Record the action.

Answer:

If we check the Pods we see two replicas:
➜ k -n project-c13 get pod | grep o3db
o3db-0 1/1 Running 0 52s
o3db-1 1/1 Running 0 42s
From their name it looks like these are managed by a StatefulSet. But if we’re not sure we could also check for the most common resources which manage Pods:
➜ k -n project-c13 get deploy,ds,sts | grep o3db
statefulset.apps/o3db 2/2 2m56s
Confirmed, we have to work with a StatefulSet. To find this out we could also look at the Pod labels:
➜ k -n project-c13 get pod --show-labels | grep o3db
o3db-0 1/1 Running 0 3m29s app=nginx,controller-revision-hash=o3db-5fbd4bb9cc,statefulset.kubernetes.io/pod-name=o3db-0
o3db-1 1/1 Running 0 3m19s app=nginx,controller-revision-hash=o3db-5fbd4bb9cc,statefulset.kubernetes.io/pod-name=o3db-1
To fulfil the task we simply run:
➜ k -n project-c13 scale sts o3db --replicas 1 --record
statefulset.apps/o3db scaled
➜ k -n project-c13 get sts o3db
NAME READY AGE
o3db 1/1 4m39s

Question 4 | Pod Ready if Service is reachable

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Do the following in Namespace default. Create a single Pod named ready-if-service-ready of image nginx:1.16.1-alpine. Configure a LivenessProbe which simply runs true. Also configure a ReadinessProbe which does check if the url http://service-am-i-ready:80 is reachable, you can use wget -T2 -O- http://service-am-i-ready:80 for this. Start the Pod and confirm it isn’t ready because of the ReadinessProbe.
Create a second Pod named am-i-ready of image nginx:1.16.1-alpine with label id: cross-server-ready. The already existing Service service-am-i-ready should now have that second Pod as endpoint.
Now the first Pod should be in ready state, confirm that.

Answer:

It’s a bit of an anti-pattern for one Pod to check another Pod for being ready using probes, hence the normally available readinessProbe.httpGet doesn’t work for absolute remote urls. Still the workaround requested in this task should show how probes and Pod<->Service communication works.
First we create the first Pod:
k run ready-if-service-ready --image=nginx:1.16.1-alpine $do > 4_pod1.yaml
vim 4_pod1.yaml
Next perform the necessary additions manually:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe:                                      # add from here
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80'   # to here
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then create the Pod:
k -f 4_pod1.yaml create
Now we create the second Pod:
k run am-i-ready --image=nginx:1.16.1-alpine --labels="id=cross-server-ready"
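To confirm the task is done (a short check, assuming both Pods run in Namespace default), the Service should now list the second Pod as endpoint and the first Pod should turn Ready:
k get ep service-am-i-ready              # should show the IP of Pod am-i-ready
k get pod ready-if-service-ready         # READY should change to 1/1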

Question 5 | Kubectl sorting

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

There are various Pods in all namespaces. Write a command into /opt/course/5/find_pods.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /opt/course/5/find_pods_uid.sh which lists all Pods sorted by field metadata.uid. Use kubectl sorting for both commands.

Answer:

A good resource here (and for many other things) is the kubectl cheat sheet. You can reach it fast when searching for "cheat sheet" in the Kubernetes docs.
/opt/course/5/find_pods.sh
kubectl get pod -A --sort-by=.metadata.creationTimestamp
For the second command:
/opt/course/5/find_pods_uid.sh
kubectl get pod -A --sort-by=.metadata.uid

Question 6 | Storage, PV, PVC, Pod volume

Task weight: 8%

Use context: kubectl config use-context k8s-c1-H

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace project-tiger named safari-pvc . It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should bound to the PV correctly.
Finally create a new Deployment safari in Namespace project-tiger which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.

Answer

vim 6_pv.yaml
Find an example from https://kubernetes.io/docs and alter it:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

Then create it:
k -f 6_pv.yaml create
Next the PersistentVolumeClaim:
vim 6_pvc.yaml
Find an example from https://kubernetes.io/docs and alter it:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then create:
k -f 6_pvc.yaml create
Next we create a Deployment and mount that volume:
k -n project-tiger create deploy safari --image=httpd:2.4.41-alpine $do > 6_dep.yaml
vim 6_dep.yaml
Alter the yaml to mount the volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      volumes:                                      # add
      - name: data                                  # add
        persistentVolumeClaim:                      # add
          claimName: safari-pvc                     # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                               # add
        - name: data                                # add
          mountPath: /tmp/safari-data               # add

k -f 6_dep.yaml create

Question 7 | Node and Pod Resource Usage

Task weight: 1%

Use context: kubectl config use-context k8s-c1-H

The metrics-server hasn't been installed yet in the cluster, but it's something that should be done soon. Your colleague would already like to know the kubectl commands to:
show node resource usage
show Pods and their containers' resource usage
Please write the commands into /opt/course/7/node.sh and /opt/course/7/pod.sh.

Answer:

The command we need to use here is top:
/opt/course/7/node.sh
kubectl top node

For the second file we might need to check the docs again:
➜ k top pod -h
Display Resource (CPU/Memory/Storage) usage of pods.

Namespace in current context is ignored even if specified with --namespace.
--containers=false: If present, print usage of containers within a pod.
--no-headers=false: If present, print output without headers.

With this we can finish this task:
/opt/course/7/pod.sh
kubectl top pod --containers=true

Question 8 | Get Master Information

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Ssh into the master node with ssh cluster1-master1. Check how the master components kubelet, kube-apiserver, kube-scheduler, kube-controller-manager and etcd are started/installed on the master node. Also find out the name of the DNS application and how it’s started/installed on the master node.
Write your findings into file /opt/course/8/master-components.txt. The file should be structured like:
/opt/course/8/master-components.txt
kubelet: [TYPE]
kube-apiserver: [TYPE]
kube-scheduler: [TYPE]
kube-controller-manager: [TYPE]
etcd: [TYPE]
dns: [TYPE] [NAME]
Choices of [TYPE] are: not-installed, process, static-pod, pod

Answer:

We could start by finding processes of the requested components, especially the kubelet at first:
➜ ssh cluster1-master1
root@cluster1-master1:~# ps aux | grep kubelet # shows kubelet process
/opt/course/8/master-components.txt
kubelet: process
kube-apiserver: static-pod
kube-scheduler: static-pod
kube-scheduler-special: static-pod (status CrashLoopBackOff)
kube-controller-manager: static-pod
etcd: static-pod
dns: pod coredns
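A possible way to double-check the [TYPE] of a component (a sketch, assuming the default kubeadm Pod names and labels): mirror Pods of static Pods are owned by a Node object, while normal Pods such as coredns are owned by a ReplicaSet:
k -n kube-system get pod kube-apiserver-cluster1-master1 -o jsonpath='{.metadata.ownerReferences[0].kind}'   # Node => static-pod
k -n kube-system get pod -l k8s-app=kube-dns -o jsonpath='{.items[0].metadata.ownerReferences[0].kind}'      # ReplicaSet => pod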

Question 9 | Kill Scheduler, Manual Scheduling

Task weight: 5%

Use context: kubectl config use-context k8s-c2-AC

Ssh into the master node with ssh cluster2-master1. Temporarily stop the kube-scheduler, this means in a way that you can start it again afterwards.
Create a single Pod named manual-schedule of image httpd:2.4-alpine, confirm it's created but not scheduled on any node.
Now you're the scheduler and have all its power, manually schedule that Pod on node cluster2-master1. Make sure it's running.
Start the kube-scheduler again and confirm it's running correctly by creating a second Pod named manual-schedule2 of image httpd:2.4-alpine and check if it's running on cluster2-worker1.

Answer:

Stop the Scheduler
First we find the master node:
➜ k get node
Then we connect and check if the scheduler is running:
➜ ssh cluster2-master1
➜ root@cluster2-master1:~# kubectl -n kube-system get pod | grep schedule
Kill the Scheduler (temporarily):
➜ root@cluster2-master1:~# cd /etc/kubernetes/manifests/
➜ root@cluster2-master1:~# mv kube-scheduler.yaml ..

Create a Pod
k run manual-schedule --image=httpd:2.4-alpine
Manually schedule the Pod
k get pod manual-schedule -o yaml > 9.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-09-04T15:51:02Z"
  labels:
    run: manual-schedule
  managedFields:
...
    manager: kubectl-run
    operation: Update
    time: "2020-09-04T15:51:02Z"
  name: manual-schedule
  namespace: default
  resourceVersion: "3515"
  selfLink: /api/v1/namespaces/default/pods/manual-schedule
  uid: 8e9d2532-4779-4e63-b5af-feb82c74a935
spec:
  nodeName: cluster2-master1        # add the master node name
  containers:
  - image: httpd:2.4-alpine
    imagePullPolicy: IfNotPresent
    name: manual-schedule
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-nxnc7
      readOnly: true
  dnsPolicy: ClusterFirst
...

The only thing a scheduler does is set the nodeName for a Pod declaration. How it finds the correct node to schedule on is a far more complicated matter that takes many variables into account.
As we cannot kubectl apply or kubectl edit , in this case we need to delete and create or replace:
k -f 9.yaml replace --force
Start the scheduler again
➜ ssh cluster2-master1
➜ root@cluster2-master1:~# cd /etc/kubernetes/manifests/
➜ root@cluster2-master1:~# mv ../kube-scheduler.yaml .
Schedule a second test Pod:
k run manual-schedule2 --image=httpd:2.4-alpine
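To confirm the scheduler is working again (assuming the node names of this cluster), both Pods should be Running and manual-schedule2 should have been placed on the worker:
k get pod manual-schedule manual-schedule2 -o wide     # expect cluster2-master1 and cluster2-worker1 in the NODE column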

Question 10 | RBAC ServiceAccount Role RoleBinding

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.

Answer:

Let’s talk a little about RBAC resources
A ClusterRole|Role defines a set of permissions and where it is available, in the whole cluster or just a single Namespace.
A ClusterRoleBinding|RoleBinding connects a set of permissions with an account and defines where it is applied, in the whole cluster or just a single Namespace.
Because of this there are 4 different RBAC combinations and 3 valid ones:
Role + RoleBinding (available in single Namespace, applied in single Namespace)
ClusterRole + ClusterRoleBinding (available cluster-wide, applied cluster-wide)
ClusterRole + RoleBinding (available cluster-wide, applied in single Namespace)
Role + ClusterRoleBinding (NOT POSSIBLE: available in single Namespace, applied cluster-wide)

To the solution
We first create the ServiceAccount:
➜ k -n project-hamster create sa processor
Then for the Role:
k -n project-hamster create role processor --verb=create --resource=secret --resource=configmap
Which will create a Role like:
kubectl -n project-hamster create role processor --verb=create --resource=secret --resource=configmap

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: processor
  namespace: project-hamster
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  - configmaps
  verbs:
  - create

Now we bind the Role to the ServiceAccount:
k -n project-hamster create rolebinding processor \
  --role processor \
  --serviceaccount project-hamster:processor

This will create a RoleBinding like:
kubectl -n project-hamster create rolebinding processor --role processor --serviceaccount project-hamster:processor

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: processor
  namespace: project-hamster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: processor
subjects:
- kind: ServiceAccount
  name: processor
  namespace: project-hamster

To test our RBAC setup we can use kubectl auth can-i:
➜ k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
➜ k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
➜ k -n project-hamster auth can-i create pod --as system:serviceaccount:project-hamster:processor
➜ k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor
➜ k -n project-hamster auth can-i get configmap --as system:serviceaccount:project-hamster:processor

Question 11 | DaemonSet on all Nodes

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2.4-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, master and worker.

Answer:

As of now we aren’t able to create a DaemonSet directly using kubectl, so we create a Deployment and just change it up:
k -n project-tiger create deployment --image=httpd:2.4-alpine ds-important $do > 11.yaml
vim 11.yaml
Then we adjust the yaml to:

apiVersion: apps/v1
kind: DaemonSet                                     # change from Deployment to Daemonset
metadata:
  creationTimestamp: null
  labels:                                           # add
    id: ds-important                                # add
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462      # add
  name: ds-important
  namespace: project-tiger                          # important
spec:
  #replicas: 1                                      # remove
  selector:
    matchLabels:
      id: ds-important                              # add
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462    # add
  #strategy: {}                                     # remove
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: ds-important                            # add
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462  # add
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:                                 # add
            cpu: 10m                                # add
            memory: 10Mi                            # add
      tolerations:                                  # add
      - effect: NoSchedule                          # add
        key: node-role.kubernetes.io/master         # add
#status: {}

k -f 11.yaml create
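A quick verification (assuming one master and two worker nodes in this cluster) is to check that DESIRED/READY of the DaemonSet equals the node count and that one Pod runs per node:
k -n project-tiger get ds ds-important
k -n project-tiger get pod -l id=ds-important -o wide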

Question 12 | Deployment on all Nodes

Task weight: 6%

Use context: kubectl config use-context k8s-c1-H

Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.
There should be only ever one Pod of that Deployment running on one worker node. We have two worker nodes: cluster1-worker1 and cluster1-worker2. Because the Deployment has three replicas the result should be that on both nodes one Pod is running. The third Pod won’t be scheduled, unless a new worker node will be added.
In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.

Answer:

There are two possible ways, one using podAntiAffinity and one using topologySpreadConstraint.

PodAntiAffinity
The idea here is that we create an "Inter-pod anti-affinity" which allows us to say a Pod should only be scheduled on a node where another Pod of a specific label (here the same label) is not already running.
Let’s begin by creating the Deployment template:

k -n project-tiger create deployment \
  --image=nginx:1.17.6-alpine deploy-important $do > 12.yaml

vim 12.yaml
Then change the yaml to:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                    # change
  name: deploy-important
  namespace: project-tiger                # important
spec:
  replicas: 3                             # change
  selector:
    matchLabels:
      id: very-important                  # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important                # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                  # change
        resources: {}
      - image: kubernetes/pause           # add
        name: container2                  # add
      affinity:                                             # add
        podAntiAffinity:                                    # add
          requiredDuringSchedulingIgnoredDuringExecution:   # add
          - labelSelector:                                  # add
              matchExpressions:                             # add
              - key: id                                     # add
                operator: In                                # add
                values:                                     # add
                - very-important                            # add
            topologyKey: kubernetes.io/hostname             # add
status: {}

TopologySpreadConstraints
We can achieve the same with topologySpreadConstraints. Best to try out and play with both.

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important                    # change
  name: deploy-important
  namespace: project-tiger                # important
spec:
  replicas: 3                             # change
  selector:
    matchLabels:
      id: very-important                  # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important                # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1                  # change
        resources: {}
      - image: kubernetes/pause           # add
        name: container2                  # add
      topologySpreadConstraints:                 # add
      - maxSkew: 1                               # add
        topologyKey: kubernetes.io/hostname      # add
        whenUnsatisfiable: DoNotSchedule         # add
        labelSelector:                           # add
          matchLabels:                           # add
            id: very-important                   # add
status: {}

Apply and Run
Let’s run it:
k -f 12.yaml create
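To confirm the intended behaviour (assuming the two worker nodes named in the task), two Pods should be Running on different workers and the third one should stay Pending:
k -n project-tiger get deploy deploy-important              # READY should show 2/3
k -n project-tiger get pod -l id=very-important -o wide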

Question 13 | Multi Containers and Pod shared Volume

Task weight: 4%

Use context: kubectl config use-context k8s-c1-H

Create a Pod named multi-container-playground in Namespace default with three containers, named c1, c2 and c3. There should be a volume attached to that Pod and mounted into every container, but the volume shouldn’t be persisted or shared with other Pods.
Container c1 should be of image nginx:1.17.6-alpine and have the name of the node where its Pod is running available as environment variable MY_NODE_NAME.
Container c2 should be of image busybox:1.31.1 and write the output of the date command every second in the shared volume into file date.log. You can use while true; do date >> /your/vol/path/date.log; sleep 1; done for this.
Container c3 should be of image busybox:1.31.1 and constantly send the content of file date.log from the shared volume to stdout. You can use tail -f /your/vol/path/date.log for this.
Check the logs of container c3 to confirm correct setup.

Answer:

First we create the Pod template:
k run multi-container-playground --image=nginx:1.17.6-alpine $do > 13.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                                                                      # change
    resources: {}
    env:                                                                          # add
    - name: MY_NODE_NAME                                                          # add
      valueFrom:                                                                  # add
        fieldRef:                                                                 # add
          fieldPath: spec.nodeName                                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c2                                                                      # add
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]  # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  - image: busybox:1.31.1                                                         # add
    name: c3                                                                      # add
    command: ["sh", "-c", "tail -f /vol/date.log"]                                # add
    volumeMounts:                                                                 # add
    - name: vol                                                                   # add
      mountPath: /vol                                                             # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                                                        # add
  - name: vol                                                                     # add
    emptyDir: {}                                                                  # add
status: {}

k -f 13.yaml create
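To confirm the setup as the task asks (a short check using the container names c1 and c3 from above):
k exec multi-container-playground -c c1 -- env | grep MY_NODE_NAME
k logs multi-container-playground -c c3        # a new date line should appear every second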

Question 14 | Find out Cluster Information

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

You're asked to find out the following information about the cluster k8s-c1-H:
How many master nodes are available?
How many worker nodes are available?
What is the Service CIDR?
Which Networking (or CNI Plugin) is configured and where is its config file?
Which suffix will static pods have that run on cluster1-worker1?
Write your answers into file /opt/course/14/cluster-info, structured like this:

Answer:

How many master and worker nodes are available?
➜ k get node
What is the Service CIDR?
➜ ssh cluster1-master1
➜ root@cluster1-master1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range

Which Networking (or CNI Plugin) is configured and where is its config file?
➜ root@cluster1-master1:~# find /etc/cni/net.d/
➜ root@cluster1-master1:~# cat /etc/cni/net.d/10-weave.conflist

{
    "cniVersion": "0.3.0",
    "name": "weave",

Which suffix will static pods have that run on cluster1-worker1?
The suffix is the node hostname with a leading hyphen. It used to be -static in earlier Kubernetes versions.

The resulting
/opt/course/14/cluster-info could look like:
/opt/course/14/cluster-info
How many master nodes are available?
1: 1
How many worker nodes are available?
2: 2
What is the Service CIDR?
3: 10.96.0.0/12
Which Networking (or CNI Plugin) is configured and where is its config file?
4: Weave, /etc/cni/net.d/10-weave.conflist
Which suffix will static pods have that run on cluster1-worker1?
5: -cluster1-worker1

Question 15 | Cluster Event Logging

Task weight: 3%

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time. Use kubectl for it.
Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.
Finally kill the containerd container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.
Do you notice differences in the events both actions caused?

Answer:

/opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp

Now we kill the kube-proxy Pod:
k -n kube-system get pod -o wide | grep proxy # find pod running on cluster2-worker1
k -n kube-system delete pod kube-proxy-z64cg

Now check the events:
sh /opt/course/15/cluster_events.sh

Write the events the killing caused into
/opt/course/15/pod_kill.log:
kube-system 9s Normal Killing pod/kube-proxy-jsv7t …
kube-system 3s Normal SuccessfulCreate daemonset/kube-proxy …
kube-system Normal Scheduled pod/kube-proxy-m52sx …
default 2s Normal Starting node/cluster2-worker1 …
kube-system 2s Normal Created pod/kube-proxy-m52sx …
kube-system 2s Normal Pulled pod/kube-proxy-m52sx …
kube-system 2s Normal Started pod/kube-proxy-m52sx …
Finally we will try to provoke events by killing the main container of the kube-proxy Pod:
➜ ssh cluster2-worker1
root@cluster2-worker1:~# crictl ps | grep kube-proxy
root@cluster2-worker1:~# crictl rm 1e020b43c4423
➜ root@cluster2-worker1:~# crictl ps | grep kube-proxy

Now we see if this caused events again and we write those into the second file:
sh /opt/course/15/cluster_events.sh
/opt/course/15/container_kill.log
kube-system 13s Normal Created pod/kube-proxy-m52sx …
kube-system 13s Normal Pulled pod/kube-proxy-m52sx …
kube-system 13s Normal Started pod/kube-proxy-m52sx …
Comparing the events we see that deleting the whole Pod triggered more actions, hence more events: for example the DaemonSet had to re-create the missing Pod. When we only killed the main container, the Pod itself still existed and only its container had to be re-created, hence fewer events.

Question 16 | Namespaces and Api Resources

Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

Create a new Namespace called cka-master.
Write the names of all namespaced Kubernetes resources (like Pod, Secret, ConfigMap…) into /opt/course/16/resources.txt.
Find the project-* Namespace with the highest number of Roles defined in it and write its name and amount of Roles into /opt/course/16/crowded-namespace.txt.

Answer:

Namespace and Namespaces Resources
We create a new Namespace:
k create ns cka-master
Now we can get a list of all resources like:
k api-resources # shows all
k api-resources -h # help always good
k api-resources --namespaced -o name > /opt/course/16/resources.txt
Namespace with most Roles
➜ k -n project-c13 get role --no-headers | wc -l
➜ k -n project-c14 get role --no-headers | wc -l
➜ k -n project-snake get role --no-headers | wc -l
➜ k -n project-tiger get role --no-headers | wc -l
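Instead of checking every Namespace by hand, a small loop could do the counting (a sketch, assuming all relevant Namespaces start with project-):
for ns in $(k get ns -o name | grep project- | cut -d/ -f2); do echo -n "$ns: "; k -n $ns get role --no-headers | wc -l; done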

Finally we write the name and amount into the file:
/opt/course/16/crowded-namespace.txt
project-c14 with 300 resources

Question 17 | Find Container of Pod and check info

Task weight: 3%

Use context: kubectl config use-context k8s-c1-H

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2.4.41-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.
Using command crictl:
Write the ID of the container and the
info.runtimeType into
/opt/course/17/pod-container.txt
Write the logs of the container into
/opt/course/17/pod-container.log

Answer:

First we create the Pod:
k -n project-tiger run tigers-reunite --image=httpd:2.4.41-alpine --labels "pod=container,container=pod"
Next we find out the node it’s scheduled on:
k -n project-tiger get pod -o wide
or fancy:
k -n project-tiger get pod tigers-reunite -o jsonpath="{.spec.nodeName}"
Then we ssh into that node and check the container info:
➜ ssh cluster1-worker2
➜ root@cluster1-worker2:~# crictl ps | grep tigers-reunite
b01edbe6f89ed 54b0995a63052 5 seconds ago Running tigers-reunite …
➜ root@cluster1-worker2:~# crictl inspect b01edbe6f89ed | grep runtimeType
"runtimeType": "io.containerd.runc.v2",
Then we fill the requested file (on the main terminal):
/opt/course/17/pod-container.txt
b01edbe6f89ed io.containerd.runc.v2
Finally we write the container logs in the second file:
ssh cluster1-worker2 'crictl logs b01edbe6f89ed' &> /opt/course/17/pod-container.log

Question 18 | Fix Kubelet

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

There seems to be an issue with the kubelet not running on cluster3-worker1. Fix it and confirm that cluster has node cluster3-worker1 available in Ready state afterwards. You should be able to schedule a Pod on cluster3-worker1 afterwards.
Write the reason of the issue into /opt/course/18/reason.txt.

Answer:

The procedure on tasks like these should be to check if the kubelet is running, if not start it, then check its logs and correct errors if there are some.
It's always helpful to check if other clusters already have some of the components defined and running, so you can copy and use existing config files. Though in this case it might not be necessary.
Check node status:
➜ k get node
NAME STATUS ROLES AGE VERSION
cluster3-master1 Ready master 27h v1.23.1
cluster3-worker1 NotReady 26h v1.23.1
First we check if the kubelet is running:
➜ ssh cluster3-worker1
➜ root@cluster3-worker1:~# ps aux | grep kubelet
root 29294 0.0 0.2 14856 1016 pts/0 S+ 11:30 0:00 grep --color=auto kubelet
Nope, so we check if it's configured as a systemd service:
➜ root@cluster3-worker1:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: inactive (dead) since Sun 2019-12-08 11:30:06 UTC; 50min 52s ago

Yes, it's configured as a service with config at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but we see it's inactive. Let's try to start it:
➜ root@cluster3-worker1:~# service kubelet start
➜ root@cluster3-worker1:~# service kubelet status
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-04-30 22:03:10 UTC; 3s ago
Docs: https://kubernetes.io/docs/home/
Process: 5989 ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=203/EXEC)
Main PID: 5989 (code=exited, status=203/EXEC)

Apr 30 22:03:10 cluster3-worker1 systemd[5989]: kubelet.service: Failed at step EXEC spawning /usr/local/bin/kubelet: No such file or directory
Apr 30 22:03:10 cluster3-worker1 systemd[1]: kubelet.service: Main process exited, code=exited, status=203/EXEC
Apr 30 22:03:10 cluster3-worker1 systemd[1]: kubelet.service: Failed with result ‘exit-code’.
We see it's trying to execute /usr/local/bin/kubelet with some parameters defined in its service config file. A good way to find errors and get more logs is to run the command manually (usually also with its parameters).
➜ root@cluster3-worker1:~# /usr/local/bin/kubelet
-bash: /usr/local/bin/kubelet: No such file or directory
➜ root@cluster3-worker1:~# whereis kubelet
kubelet: /usr/bin/kubelet

Another way would be to check the extended logging of a service using journalctl -u kubelet.
Well, there we have it, wrong path specified. Correct the path in file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf and run:
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf # fix
systemctl daemon-reload && systemctl restart kubelet
systemctl status kubelet # should now show running

Finally we write the reason into the file:
/opt/course/18/reason.txt
wrong path to kubelet binary specified in service config

Question 19 | Create Secret and mount into Pod

Task weight: 3%

Use context: kubectl config use-context k8s-c3-CCC

Do the following in a new Namespace secret. Create a Pod named secret-pod of image busybox:1.31.1 which should keep running for some time. It should be able to run on master nodes as well, create the proper toleration.
There is an existing Secret located at /opt/course/19/secret1.yaml, create it in the secret Namespace and mount it readonly into the Pod at /tmp/secret1.
Create a new Secret in Namespace secret called secret2 which should contain user=user1 and pass=1234. These entries should be available inside the Pod’s container as environment variables APP_USER and APP_PASS.
Confirm everything is working.

Answer

First we create the Namespace and the requested Secrets in it:
k create ns secret
cp /opt/course/19/secret1.yaml 19_secret1.yaml
vim 19_secret1.yaml

We need to adjust the Namespace for that Secret:

apiVersion: v1
data:
  halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: secret           # change

k -f 19_secret1.yaml create
Next we create the second Secret:
k -n secret create secret generic secret2 --from-literal=user=user1 --from-literal=pass=1234
Now we create the Pod template:
k -n secret run secret-pod --image=busybox:1.31.1 $do -- sh -c "sleep 5d" > 19.yaml
vim 19.yaml
Then make the necessary changes:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret                       # add
spec:
  tolerations:                            # add
  - effect: NoSchedule                    # add
    key: node-role.kubernetes.io/master   # add
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.31.1
    name: secret-pod
    resources: {}
    env:                                  # add
    - name: APP_USER                      # add
      valueFrom:                          # add
        secretKeyRef:                     # add
          name: secret2                   # add
          key: user                       # add
    - name: APP_PASS                      # add
      valueFrom:                          # add
        secretKeyRef:                     # add
          name: secret2                   # add
          key: pass                       # add
    volumeMounts:                         # add
    - name: secret1                       # add
      mountPath: /tmp/secret1             # add
      readOnly: true                      # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                                # add
  - name: secret1                         # add
    secret:                               # add
      secretName: secret1                 # add
status: {}

It might not be necessary in current K8s versions to specify the readOnly: true because it’s the default setting anyways.
And execute:
k -f 19.yaml create
Finally we check if all is correct:
➜ k -n secret exec secret-pod -- env | grep APP
➜ k -n secret exec secret-pod -- find /tmp/secret1
➜ k -n secret exec secret-pod -- cat /tmp/secret1/halt

Question 20 | Update Kubernetes Version and join cluster

Task weight: 10%

Use context: kubectl config use-context k8s-c3-CCC

Your coworker said node cluster3-worker2 is running an older Kubernetes version and is not even part of the cluster. Update Kubernetes on that node to the exact version that’s running on cluster3-master1. Then add this node to the cluster. Use kubeadm for this.

Answer:

Upgrade Kubernetes to cluster3-master1 version
Search in the docs for kubeadm upgrade: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade
➜ k get node
NAME STATUS ROLES AGE VERSION
cluster3-master1 Ready control-plane,master 116m v1.23.1
cluster3-worker1 NotReady 112m v1.23.1
Master node seems to be running Kubernetes 1.23.1 and cluster3-worker2 is not yet part of the cluster.
➜ ssh cluster3-worker2
➜ root@cluster3-worker2:~# kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:39:51Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

➜ root@cluster3-worker2:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

➜ root@cluster3-worker2:~# kubelet --version
Kubernetes v1.22.4
Here kubeadm is already installed in the wanted version, so we can run:
➜ root@cluster3-worker2:~# kubeadm upgrade node
couldn’t create a Kubernetes client from file “/etc/kubernetes/kubelet.conf”: failed to load admin kubeconfig: open /etc/kubernetes/kubelet.conf: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
This is usually the proper command to upgrade a node. But this error means that this node was never even initialised, so nothing to update here. This will be done later using
kubeadm join. For now we can continue with kubelet and kubectl:
➜ root@cluster3-worker2:~# apt update
➜ root@cluster3-worker2:~# apt show kubectl -a | grep 1.23
➜ root@cluster3-worker2:~# apt install kubectl=1.23.1-00 kubelet=1.23.1-00
➜ root@cluster3-worker2:~# kubelet --version

Now we’re up to date with kubeadm, kubectl and kubelet. Restart the kubelet:
➜ root@cluster3-worker2:~# systemctl restart kubelet
➜ root@cluster3-worker2:~# service kubelet status

Add cluster3-worker2 to cluster
First we log into the master1 and generate a new TLS bootstrap token, also printing out the join command:
➜ ssh cluster3-master1
➜ root@cluster3-master1:~# kubeadm token create --print-join-command
kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a
➜ root@cluster3-master1:~# kubeadm token list
TOKEN TTL EXPIRES …
mnkpfu.d2lpu8zypbyumr3i 23h 2020-05-01T22:43:45Z …
poa13f.hnrs6i6ifetwii75 …
We see the expiration of 23h for our token, we could adjust this by passing the ttl argument.
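For example (just a sketch, not required for this task), a token with a longer lifetime could be printed with:
kubeadm token create --ttl 48h --print-join-command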
Next we connect again to worker2 and simply execute the join command:
➜ ssh cluster3-worker2
➜ root@cluster3-worker2:~# kubeadm join 192.168.100.31:6443 --token leqq1l.1hlg4rw8mu7brv73 --discovery-token-ca-cert-hash sha256:2e2c3407a256fc768f0d8e70974a8e24d7b9976149a79bd08858c4d7aa2ff79a

If you have troubles with kubeadm join you might need to run kubeadm reset.
This looks great though for us. Finally we head back to the main terminal and check the node status:
➜ k get node
We see
cluster3-worker2 is now available and up to date.

Question 21 | Create a Static Pod and Service

Task weight: 2%

Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-master1. It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.
Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-master1 internal IP address. You can connect to the internal node IPs from your main terminal.

Answer:

➜ ssh cluster3-master1
➜ root@cluster1-master1:~# cd /etc/kubernetes/manifests/
➜ root@cluster1-master1:~# kubectl run my-static-pod \
    --image=nginx:1.16-alpine \
    -o yaml --dry-run=client > my-static-pod.yaml

Then edit the
my-static-pod.yaml to add the requested resource requests:
/etc/kubernetes/manifests/my-static-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: my-static-pod
spec:
  containers:
  - image: nginx:1.16-alpine
    name: my-static-pod
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

And make sure it's running:
➜ k get pod -A | grep my-static
Now we expose that static Pod:
k expose pod my-static-pod-cluster3-master1 \
  --name static-pod-service \
  --type=NodePort \
  --port 80

This would generate a Service like:
kubectl expose pod my-static-pod-cluster3-master1 --name static-pod-service --type=NodePort --port 80

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: static-pod-service
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: my-static-pod
  type: NodePort
status:
  loadBalancer: {}

Then run and test:
➜ k get svc,ep -l run=my-static-pod
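To test reachability from the main terminal (a sketch; the internal IP and NodePort are placeholders to be taken from the outputs above):
k get node cluster3-master1 -o wide        # find the INTERNAL-IP
curl <INTERNAL-IP>:<NODE-PORT>             # should return the nginx welcome page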

Question 22 | Check how long certificates are valid

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-master1. Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration.
Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.
Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh.

Answer:

First let’s find that certificate:
➜ ssh cluster2-master1
➜ root@cluster2-master1:~# find /etc/kubernetes/pki | grep apiserver
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver-etcd-client.crt
/etc/kubernetes/pki/apiserver-etcd-client.key
/etc/kubernetes/pki/apiserver-kubelet-client.crt
/etc/kubernetes/pki/apiserver.key
/etc/kubernetes/pki/apiserver-kubelet-client.key
Next we use openssl to find out the expiration date:
➜ root@cluster2-master1:~# openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep Validity -A2

Validity
Not Before: Jan 14 18:18:15 2021 GMT
Not After : Jan 14 18:49:40 2022 GMT
There we have it, so we write it in the required location on our main terminal:
/opt/course/22/expiration
Jan 14 18:49:40 2022 GMT
And we use the feature from kubeadm to get the expiration too:
➜ root@cluster2-master1:~# kubeadm certs check-expiration | grep apiserver
apiserver Jan 14, 2022 18:49 UTC 363d ca no
apiserver-etcd-client Jan 14, 2022 18:49 UTC 363d etcd-ca no
apiserver-kubelet-client Jan 14, 2022 18:49 UTC 363d ca no
Looking good. And finally we write the command that would renew the apiserver certificate into the requested location:
/opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver

Question 23 | Kubelet client/server cert info

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Node cluster2-worker1 has been added to the cluster using kubeadm and TLS bootstrapping.
Find the “Issuer” and “Extended Key Usage” values of the cluster2-worker1:
kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt.
Compare the “Issuer” and “Extended Key Usage” fields of both certificates and make sense of these.

Answer:

To find the correct kubelet certificate directory, we can look for the default value of the --cert-dir parameter for the kubelet. For this search for “kubelet” in the Kubernetes docs which will lead to: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet. We can check if another certificate directory has been configured using ps aux or in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
First we check the kubelet client certificate:
➜ ssh cluster2-worker1
➜ root@cluster2-worker1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep Issuer
Issuer: CN = kubernetes

➜ root@cluster2-worker1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep "Extended Key Usage" -A1
X509v3 Extended Key Usage:
TLS Web Client Authentication
Next we check the kubelet server certificate:
➜ root@cluster2-worker1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep Issuer

Issuer: CN = cluster2-worker1-ca@1588186506

➜ root@cluster2-worker1:~# openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep "Extended Key Usage" -A1
X509v3 Extended Key Usage:
TLS Web Server Authentication
We see that the server certificate was generated on the worker node itself and the client certificate was issued by the Kubernetes api. The "Extended Key Usage" also shows if it's for client or server authentication.
More about this: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping

Question 24 | NetworkPolicy

Task weight: 9%

Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.
To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:
connect to
db1-* Pods on port 1111
connect to
db2-* Pods on port 2222
Use the app label of Pods in your policy.
After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.

Answer:

First we look at the existing Pods and their labels:
➜ k -n project-snake get pod
NAME READY STATUS RESTARTS AGE
backend-0 1/1 Running 0 8s
db1-0 1/1 Running 0 8s
db2-0 1/1 Running 0 10s
vault-0 1/1 Running 0 10s
➜ k -n project-snake get pod -L app
NAME READY STATUS RESTARTS AGE APP
backend-0 1/1 Running 0 3m15s backend
db1-0 1/1 Running 0 3m15s db1
db2-0 1/1 Running 0 3m17s db2
vault-0 1/1 Running 0 3m17s vault
We test the current connection situation and see nothing is restricted:
➜ k -n project-snake get pod -o wide
NAME READY STATUS RESTARTS AGE IP …
backend-0 1/1 Running 0 4m14s 10.44.0.24 …
db1-0 1/1 Running 0 4m14s 10.44.0.25 …
db2-0 1/1 Running 0 4m16s 10.44.0.23 …
vault-0 1/1 Running 0 4m16s 10.44.0.22 …

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
database one

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
database two

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
vault secret storage
Now we create the NP by copying and changing an example from the k8s docs:
vim 24_np.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress                    # policy is only about Egress
  egress:
    -                           # first rule
      to:                           # first condition "to"
      - podSelector:
          matchLabels:
            app: db1
      ports:                        # second condition "port"
      - protocol: TCP
        port: 1111
    -                           # second rule
      to:                           # first condition "to"
      - podSelector:
          matchLabels:
            app: db2
      ports:                        # second condition "port"
      - protocol: TCP
        port: 2222

The NP above has two rules with two conditions each, it can be read as:
allow outgoing traffic if:
(destination pod has label app=db1 AND port is 1111)
OR
(destination pod has label app=db2 AND port is 2222)

Wrong example (the above is the correct solution; what follows is a wrong one for comparison)
Now let’s shortly look at a wrong example:
WRONG

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Egress
  egress:
    -                           # first rule
      to:                           # first condition "to"
      - podSelector:                    # first "to" possibility
          matchLabels:
            app: db1
      - podSelector:                    # second "to" possibility
          matchLabels:
            app: db2
      ports:                        # second condition "ports"
      - protocol: TCP                   # first "ports" possibility
        port: 1111
      - protocol: TCP                   # second "ports" possibility
        port: 2222

The NP above has one rule with two conditions and two condition-entries each, it can be read as:
allow outgoing traffic if:
(destination pod has label app=db1 OR destination pod has label app=db2)
AND
(destination port is 1111 OR destination port is 2222)

Using this NP it would still be possible for backend-* Pods to connect to db2-* Pods on port 1111 for example which should be forbidden.

Create NetworkPolicy
We create the correct NP:
k -f 24_np.yaml create
And test again:
➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.25:1111
database one

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.23:2222
database two

➜ k -n project-snake exec backend-0 -- curl -s 10.44.0.22:3333
^C
Also helpful to use kubectl describe on the NP to see how k8s has interpreted the policy.
Great, looking more secure. Task done.

Question 25 | Etcd Snapshot Save and Restore

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-master1 and save it on the master node at /tmp/etcd-backup.db.
Then create a Pod of your kind in the cluster.
Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.

Answer:

Etcd Backup
First we log into the master and try to create a snapshot of etcd:
➜ ssh cluster3-master1
➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db
Error: rpc error: code = Unavailable desc = transport is closing
But it fails because we need to authenticate ourselves. For the necessary information we can check the etcd manifest:
➜ root@cluster3-master1:~# vim /etc/kubernetes/manifests/etcd.yaml
We only check the etcd.yaml for the necessary information; we don't change it.
/etc/kubernetes/manifests/etcd.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.100.31:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt                           # use
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://192.168.100.31:2380
    - --initial-cluster=cluster3-master1=https://192.168.100.31:2380
    - --key-file=/etc/kubernetes/pki/etcd/server.key                            # use
    - --listen-client-urls=https://127.0.0.1:2379,https://192.168.100.31:2379   # use
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://192.168.100.31:2380
    - --name=cluster3-master1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt                    # use
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.3.15-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd                                                       # important
      type: DirectoryOrCreate
    name: etcd-data
status: {}

But we also know that the api-server is connecting to etcd, so we can check how its manifest is configured:
➜ root@cluster3-master1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep etcd
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379

We use the authentication information and pass it to etcdctl:
➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/server.crt \
    --key /etc/kubernetes/pki/etcd/server.key

Snapshot saved at /tmp/etcd-backup.db
NOTE: Don't use snapshot status because it can alter the snapshot file and render it invalid
Etcd restore
Now create a Pod in the cluster and wait for it to be running:
➜ root@cluster3-master1:~# kubectl run test --image=nginx
➜ root@cluster3-master1:~# kubectl get pod -l run=test -w
NOTE: If you didn’t solve questions 18 or 20 and cluster3 doesn’t have a ready worker node then the created pod might stay in a Pending state. This is still ok for this task.
Next we stop all controlplane components:
root@cluster3-master1:~# cd /etc/kubernetes/manifests/
root@cluster3-master1:/etc/kubernetes/manifests# mv * ..
root@cluster3-master1:/etc/kubernetes/manifests# watch crictl ps
Now we restore the snapshot into a specific directory:
➜ root@cluster3-master1:~# ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
    --data-dir /var/lib/etcd-backup \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    --cert /etc/kubernetes/pki/etcd/server.crt \
    --key /etc/kubernetes/pki/etcd/server.key

2020-09-04 16:50:19.650804 I | mvcc: restore compact to 9935
2020-09-04 16:50:19.659095 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
We could specify another host to make the backup from by using etcdctl --endpoints http://IP, but here we just use the default value which is: http://127.0.0.1:2379,http://127.0.0.1:4001.
The restored files are located at the new folder /var/lib/etcd-backup, now we have to tell etcd to use that directory:
➜ root@cluster3-master1:~# vim /etc/kubernetes/etcd.yaml

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
...
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd-backup                # change
      type: DirectoryOrCreate
    name: etcd-data
status: {}

Now we move all controlplane yaml again into the manifest directory. Give it some time (up to several minutes) for etcd to restart and for the api-server to be reachable again:
root@cluster3-master1:/etc/kubernetes/manifests# mv ../*.yaml .
root@cluster3-master1:/etc/kubernetes/manifests# watch crictl ps
Then we check again for the Pod:
➜ root@cluster3-master1:~# kubectl get pod -l run=test
