CKA 模拟真题 Killer.sh Extra | Question 2 | Curl Manually Contact API

Use context: kubectl config use-context k8s-c1-H

There is an existing ServiceAccount secret-reader in Namespace project-hamster. Create a Pod of image curlimages/curl:7.65.3 named tmp-api-contact which uses this ServiceAccount. Make sure the container keeps running.

Exec into the Pod and use curl to access the Kubernetes Api of that cluster manually, listing all available secrets. You can ignore insecure https connection. Write the command(s) for this into file /opt/course/e4/list-secrets.sh .


译文:

在 Namespace project-hamster 中有一个现有的 ServiceAccount secret-reader 。创建一个名为 tmp-api-contact 的Pod,镜像为 curlimages/curl:7.65.3 ,并使用这个ServiceAccount。确保该容器持续运行。

进入(exec)该Pod,使用curl手动访问该集群的Kubernetes API,列出所有可用的secret。你可以忽略不安全的HTTPS连接。把相关命令写进文件 /opt/course/e4/list-secrets.sh
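One common way to solve this (a sketch: the token path is the standard in-cluster ServiceAccount mount, and `sleep 1d` is just one way to keep the container alive):

```shell
# Create the Pod from a manifest so the ServiceAccount can be set explicitly.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tmp-api-contact
  namespace: project-hamster
spec:
  serviceAccountName: secret-reader
  containers:
  - name: tmp-api-contact
    image: curlimages/curl:7.65.3
    command: ["sh", "-c", "sleep 1d"]   # keeps the container running
EOF

# Inside the Pod (kubectl -n project-hamster exec -it tmp-api-contact -- sh),
# the token is mounted at a well-known path; -k ignores the insecure connection:
#   TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
#   curl -k -H "Authorization: Bearer ${TOKEN}" https://kubernetes.default/api/v1/secrets

# Record the same commands in the requested script file:
cat <<'EOF' > /opt/course/e4/list-secrets.sh
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -H "Authorization: Bearer ${TOKEN}" https://kubernetes.default/api/v1/secrets
EOF
```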


CKA 模拟真题 Killer.sh | Extra Question 1 | Find Pods first to be terminated

Use context: kubectl config use-context k8s-c1-H

Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory) to schedule all Pods. Write the Pod names into /opt/course/e1/pods-not-stable.txt.


译文:

检查命名空间 project-c13 中所有可用的Pod,找出当节点资源(cpu或内存)不足以调度所有Pod时,最可能首先被终止的Pod的名称。把这些Pod的名字写进 /opt/course/e1/pods-not-stable.txt
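The Pods most likely to be evicted first are those with QoS class BestEffort, i.e. without resource requests or limits. A sketch of how to find them:

```shell
# List Pod name and QoS class; BestEffort Pods are evicted first under pressure.
kubectl -n project-c13 get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.qosClass}{"\n"}{end}'

# Filter and write the unstable Pod names into the requested file:
kubectl -n project-c13 get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.qosClass}{"\n"}{end}' \
  | awk '$2=="BestEffort" {print $1}' > /opt/course/e1/pods-not-stable.txt
```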


CKA 模拟真题 Killer.sh | Preview Question 3

Use context: kubectl config use-context k8s-c2-AC

Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.

Change the Service CIDR to 11.96.0.0/12 for the cluster.

Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.


译文:

在命名空间 default 中创建一个名为 check-ip 的Pod,镜像使用 httpd:2.4.41-alpine 。将其作为一个名为 check-ip-service 的ClusterIP服务暴露在80端口。记住/输出该服务的IP。

将集群的Service CIDR改为 11.96.0.0/12

然后创建第二个名为 check-ip-service2 的服务,指向同一个Pod,检查你的设置是否生效。最后检查第一个服务的IP是否有变化。
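A sketch of the steps, assuming a kubeadm cluster where the kube-apiserver and kube-controller-manager run as static Pods:

```shell
kubectl run check-ip --image=httpd:2.4.41-alpine
kubectl expose pod check-ip --name check-ip-service --port 80
kubectl get svc check-ip-service        # note the ClusterIP

# On the controlplane node, change the CIDR in both static Pod manifests:
#   /etc/kubernetes/manifests/kube-apiserver.yaml
#   /etc/kubernetes/manifests/kube-controller-manager.yaml
#     --service-cluster-ip-range=11.96.0.0/12
# kubelet restarts the static Pods after the files change.

kubectl expose pod check-ip --name check-ip-service2 --port 80
kubectl get svc        # check-ip-service2 gets an IP from the new range;
                       # the first Service keeps its old IP
```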


CKA 模拟真题 Killer.sh | Preview Question 2

Use context: kubectl config use-context k8s-c1-H

You're asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster :

Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31 . Make sure the busybox container keeps running for some time.

Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.

Find the kube-proxy container on all nodes cluster1-controlplane1 , cluster1-node1 and cluster1-node2 and make sure that it's using iptables. Use command crictl for this.

Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt .

Finally delete the Service and confirm that the iptables rules are gone from all nodes.


译文:

你被要求确认kube-proxy在所有节点上都正确运行。为此,在名称空间 project-hamster 中执行以下操作。

创建一个名为 p2-pod 的新Pod,包含两个容器:一个使用镜像 nginx:1.21.3-alpine ,另一个使用镜像 busybox:1.31 。确保busybox容器持续运行一段时间。

创建一个名为 p2-service 的新服务,在集群中通过3000->80端口向内部公开该Pod。

在所有节点 cluster1-controlplane1 、 cluster1-node1 和 cluster1-node2 上找到kube-proxy容器,并确保它正在使用iptables。为此使用 crictl 命令。

把属于已创建的服务 p2-service 的所有节点的iptables规则写进文件 /opt/course/p2/iptables.txt

最后删除该服务,并确认所有节点的iptables规则已经消失。
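A sketch of the workflow (`sleep 1d` is one way to keep busybox running; the crictl container id is a placeholder to fill in per node):

```shell
cat <<'EOF' | kubectl -n project-hamster apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: p2-pod
spec:
  containers:
  - name: c1
    image: nginx:1.21.3-alpine
  - name: c2
    image: busybox:1.31
    command: ["sh", "-c", "sleep 1d"]   # keeps busybox running
EOF

kubectl -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80

# On each node, confirm kube-proxy runs and check its logs for the proxy mode:
#   ssh cluster1-controlplane1
#   crictl ps | grep kube-proxy
#   crictl logs <kube-proxy-container-id> 2>&1 | grep -i iptables

# Collect the Service's iptables rules from all nodes:
for node in cluster1-controlplane1 cluster1-node1 cluster1-node2; do
  ssh "$node" iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
done

# After kubectl delete svc p2-service, rerun the grep on each node: no output expected.
```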


CKA 模拟真题 Killer.sh | Preview Question 1

Use context: kubectl config use-context k8s-c2-AC

The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1:

Server private key location
Server certificate expiration date
Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt

Finally you're asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status.


译文:

集群管理员要求你找出关于在cluster2-controlplane1上运行的etcd的以下信息。

  • 服务器私钥位置
  • 服务器证书的到期日
  • 是否启用了客户端证书认证

将这些信息写入 /opt/course/p1/etcd-info.txt

最后,要求你在 cluster2-controlplane1 上将etcd快照保存到 /etc/etcd-snapshot.db ,并显示其状态。
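A sketch assuming kubeadm defaults (etcd as a static Pod, certificates under /etc/kubernetes/pki/etcd):

```shell
# On cluster2-controlplane1:
# Key location and client-cert-auth setting are in the static Pod manifest.
grep -E 'key-file|client-cert-auth' /etc/kubernetes/manifests/etcd.yaml

# Server certificate expiration date:
openssl x509 -noout -enddate -in /etc/kubernetes/pki/etcd/server.crt

# Snapshot and its status:
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert   /etc/kubernetes/pki/etcd/server.crt \
  --key    /etc/kubernetes/pki/etcd/server.key
ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db
```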


CKA 模拟真题 Killer.sh | Question 25 | Etcd Snapshot Save and Restore

Task weight: 8%

Use context: kubectl config use-context k8s-c3-CCC

Make a backup of etcd running on cluster3-controlplane1 and save it on the controlplane node at /tmp/etcd-backup.db .

Then create a Pod of your kind in the cluster.

Finally restore the backup, confirm the cluster is still working and that the created Pod is no longer with us.


译文:

对在 cluster3-controlplane1 上运行的etcd做一个备份,并将其保存在控制平面节点上的 /tmp/etcd-backup.db

然后在集群中创建一个你喜欢的Pod。

最后恢复备份,确认集群仍在工作,并且之前创建的Pod已经不存在了。
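A sketch on cluster3-controlplane1 (certificate paths are the kubeadm defaults; the test Pod name and image are arbitrary choices):

```shell
ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup.db \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert   /etc/kubernetes/pki/etcd/server.crt \
  --key    /etc/kubernetes/pki/etcd/server.key

kubectl run test-pod --image=nginx:alpine    # the Pod that should vanish after restore

# Restore into a fresh data directory, then point etcd at it:
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup
# Edit /etc/kubernetes/manifests/etcd.yaml: change the etcd-data hostPath volume
# to /var/lib/etcd-backup; kubelet recreates the etcd Pod automatically.

kubectl get pod test-pod                     # should now be NotFound
```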


CKA 模拟真题 Killer.sh | Question 24 | NetworkPolicy

Task weight: 9%

Use context: kubectl config use-context k8s-c1-H

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake . It should allow the backend-* Pods only to:

connect to db1-* Pods on port 1111
connect to db2-* Pods on port 2222
Use the app label of Pods in your policy.

After implementation, connections from backend-* Pods to vault-* Pods on port 3333 should for example no longer work.


译文:

曾经发生过一起安全事件,一个入侵者能够从一个被入侵的后端Pod访问整个集群。

为了防止这种情况,在 Namespace project-snake 中创建一个名为 np-backend 的NetworkPolicy。它应该只允许 backend-* Pods:

  • 连接到 db1-* Pods 的1111端口
  • 连接到 db2-* Pods 的2222端口

在你的策略中使用Pods的 app 标签。

实施后,例如从 backend-* Pods 到 vault-* Pods 3333端口的连接应该不再有效。
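A sketch of the policy, assuming the Pods carry labels like app: backend, app: db1 and app: db2 (verify the actual label values first):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress           # restricts only outgoing traffic from backend Pods
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 2222
```

Because only these two egress rules exist, traffic to vault Pods on port 3333 is denied once the policy is applied.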


CKA 模拟真题 Killer.sh | Question 23 | Kubelet client/server cert info

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Node cluster2-node1 has been added to the cluster using kubeadm and TLS bootstrapping.

Find the "Issuer" and "Extended Key Usage" values of the cluster2-node1:

kubelet client certificate, the one used for outgoing connections to the kube-apiserver.
kubelet server certificate, the one used for incoming connections from the kube-apiserver.
Write the information into file /opt/course/23/certificate-info.txt .

Compare the "Issuer" and "Extended Key Usage" fields of both certificates and make sense of these.


译文:

节点 cluster2-node1 已经使用kubeadm和TLS引导添加到集群中。

找到 cluster2-node1 的 "Issuer" 和 "Extended Key Usage" 值:

kubelet客户端证书,用于向外连接kube-apiserver。
kubelet服务器证书,用于从kube-apiserver传入的连接。
将这些信息写入文件 /opt/course/23/certificate-info.txt

比较两个证书的 "Issuer" 和 "Extended Key Usage" 字段,并理解这些值的含义。
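A sketch using the kubeadm default kubelet certificate locations on the node:

```shell
# On cluster2-node1:
# Client certificate - outgoing connections to the kube-apiserver:
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem \
  | grep -A1 -E 'Issuer|Extended Key Usage'

# Server certificate - incoming connections from the kube-apiserver:
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt \
  | grep -A1 -E 'Issuer|Extended Key Usage'
# Comparing the two shows client vs server key usage and possibly
# different issuers, which is the point of the exercise.
```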


CKA 模拟真题 Killer.sh | Question 22 | Check how long certificates are valid

Task weight: 2%

Use context: kubectl config use-context k8s-c2-AC

Check how long the kube-apiserver server certificate is valid on cluster2-controlplane1 . Do this with openssl or cfssl. Write the expiration date into /opt/course/22/expiration .

Also run the correct kubeadm command to list the expiration dates and confirm both methods show the same date.

Write the correct kubeadm command that would renew the apiserver server certificate into /opt/course/22/kubeadm-renew-certs.sh .


译文:

检查 kube-apiserver 服务器证书在 cluster2-controlplane1 上的有效时间。用openssl或cfssl来完成。把到期日期写进 /opt/course/22/expiration

同时运行正确的 kubeadm 命令来列出到期日期,并确认两种方法都显示相同的日期。

将更新apiserver服务器证书的正确 kubeadm 命令写入 /opt/course/22/kubeadm-renew-certs.sh
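A sketch with kubeadm default paths (the kubeadm certs subcommands require kubeadm 1.20+; older versions used kubeadm alpha certs):

```shell
# On cluster2-controlplane1:
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt \
  | tee /opt/course/22/expiration

kubeadm certs check-expiration | grep apiserver   # should show the same date

# Renewal command requested by the task:
echo 'kubeadm certs renew apiserver' > /opt/course/22/kubeadm-renew-certs.sh
```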


CKA 模拟真题 Killer.sh | Question 21 | Create a Static Pod and Service

Use context: kubectl config use-context k8s-c3-CCC

Create a Static Pod named my-static-pod in Namespace default on cluster3-controlplane1 . It should be of image nginx:1.16-alpine and have resource requests for 10m CPU and 20Mi memory.

Then create a NodePort Service named static-pod-service which exposes that static Pod on port 80 and check if it has Endpoints and if it's reachable through the cluster3-controlplane1 internal IP address. You can connect to the internal node IPs from your main terminal.


译文:

在 cluster3-controlplane1 上的命名空间 default 中创建一个名为 my-static-pod 的静态Pod(Static Pod)。镜像为 nginx:1.16-alpine ,并且有10m CPU和20Mi内存的资源请求。

然后创建一个名为 static-pod-service 的 NodePort 服务,在80端口公开静态Pod,并检查它是否有Endpoints,是否可以通过 cluster3-controlplane1 的内部IP地址到达。你可以从你的主终端连接到内部节点的IP。
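A sketch: static Pod manifests are read by kubelet from /etc/kubernetes/manifests (the kubeadm default staticPodPath), and the running Pod gets the node name appended:

```yaml
# /etc/kubernetes/manifests/my-static-pod.yaml on cluster3-controlplane1
apiVersion: v1
kind: Pod
metadata:
  name: my-static-pod
  namespace: default
spec:
  containers:
  - name: my-static-pod
    image: nginx:1.16-alpine
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
```

The Pod then appears as my-static-pod-cluster3-controlplane1 and can be exposed with, e.g., `kubectl expose pod my-static-pod-cluster3-controlplane1 --name static-pod-service --type=NodePort --port 80`; check `kubectl get ep static-pod-service` and curl the node's internal IP on the assigned NodePort.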
