
CKA Exam Questions and Solutions

### 1. Set configuration context $kubectl config use-context k8s. Monitor the logs of Pod foobar and extract log lines corresponding to error unable-to-access-website. Write them to /opt/KULM00612/foobar.

  • Analysis: inspect a pod's logs and save the lines that match the error string to a file.

kubectl config use-context k8s
kubectl logs foobar | grep 'unable-to-access-website' > /opt/KULM00612/foobar

Check:

  • kubectl logs foobar — on the exam this pod only produces 7 or 8 lines of output, so the single line that matches is easy to spot.
cat /opt/KULM00612/foobar   # verify the result

### 2. Set configuration context $kubectl config use-context k8s. List all PVs sorted by capacity, saving the full kubectl output to /opt/KUCC0006/my_volumes. Use kubectl's own functionality for sorting the output, and do not manipulate it any further.

  • Analysis: sort PVs by capacity using kubectl's built-in sorting.

kubectl config use-context k8s
kubectl get pv --sort-by=.spec.capacity.storage > /opt/KUCC0006/my_volumes

Note that the task asks for all PVs; PersistentVolumes are cluster-scoped rather than namespaced, so no namespace flag is needed.
How do you look the command up?
kubectl get pv --help
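
If you forget the exact JSONPath for the capacity field, kubectl explain can confirm the field names (a quick sanity check, not required by the task):

kubectl explain pv.spec.capacity
kubectl explain pv.spec.capacity.storage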


### 3. Set configuration context $kubectl config use-context k8s. Ensure a single instance of Pod nginx is running on each node of the Kubernetes cluster where nginx also represents the image name which has to be used. Do not override any taints currently in place. Use a Daemonset to complete this task and use ds.kusc00612 as the Daemonset name.

  • Analysis: tests DaemonSets; do not add tolerations for any node taints.

  • Reference:
    https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/

kubectl config use-context k8s
  • vi ds.kusc00612.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds.kusc00612
  labels:
    k8s-app: ds.kusc00612
spec:
  selector:
    matchLabels:
      name: ds.kusc00612
  template:
    metadata:
      labels:
        name: ds.kusc00612
    spec:
      containers:
      - name: nginx
        image: nginx

After applying it, check the result. Usually the pod will not be running on every node, because some nodes carry taints. You can see which nodes are tainted with:

kubectl describe nodes | grep Taints

This prints one line per node; untainted nodes show <none>. Then run kubectl get nodes to see which node is tainted, and kubectl get po -o wide | grep 0612 to confirm that the node missing a pod is indeed the tainted one. That serves as the check.
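
A one-command variant that pairs each node name with its taints, which makes the comparison easier to eyeball (standard kubectl custom-columns output):

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints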


### 4. Set configuration context $kubectl config use-context k8s. Perform the following tasks: Add an init container to lumpy-koala (which has been defined in spec file /opt/kucc00100/pod-specKUCC00612.yaml). The init container should create an empty file named /workdir/calm.txt. If /workdir/calm.txt is not detected, the Pod should exit. Once the spec file has been updated with the init container definition, the Pod should be created.

  • Analysis: the spec already exists at /opt/kucc00100/pod-specKUCC00612.yaml, but the object has not been created in the cluster yet. Start with kubectl get po | grep lumpy-koala to confirm the pod does not exist. Then edit the YAML and apply it: add an init container that creates the file, mount /workdir as a volume (the task does not specify a type, so emptyDir is fine), and use a liveness probe to check for the file.

  • References:

  • https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#using-init-containers
  • https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
  • https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

kubectl config use-context k8s
Append to pod-specKUCC00612.yaml:

  initContainers:
  - name: init-container
    image: busybox
    command: ['sh', '-c', 'touch /workdir/calm.txt']
    volumeMounts:
    - mountPath: /workdir
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
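
The main container in the provided spec also needs the volume mounted and, to satisfy "the Pod should exit if /workdir/calm.txt is not detected", a liveness probe. A sketch to merge into the existing container entry — the name and image below are placeholders; keep whatever the provided spec uses:

  containers:
  - name: main              # placeholder: keep the container name from the provided spec
    image: busybox          # placeholder: keep the image from the provided spec
    livenessProbe:
      exec:
        command: ['cat', '/workdir/calm.txt']
    volumeMounts:
    - mountPath: /workdir
      name: cache-volume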


kubectl apply -f /opt/kucc00100/pod-specKUCC00612.yaml
kubectl get po | grep lumpy-koala
Once it is Running:
kubectl exec -ti lumpy-koala -- ls /workdir
Check that calm.txt is there.

### 5. Set configuration context $kubectl config use-context k8s. Create a pod named kucc6 with a single container for each of the following images running inside (there may be between 1 and 4 images specified): nginx + redis + memcached + consul.

  • Analysis: create one pod containing four containers, one per image.

Reference: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

vi kucc6.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kucc6
  labels:
    env: kucc6
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

kubectl apply -f kucc6.yaml
kubectl get po

### 6. Set configuration context $kubectl config use-context k8s. Schedule a Pod as follows: Name: nginxkusc00612 Image: nginx Node selector: disk=ssd

  • Analysis: schedule a pod onto a node with a given label, using nodeSelector.

  • Search the official docs for nodeSelector to find a sample YAML.

kubectl config use-context k8s

apiVersion: v1
kind: Pod
metadata:
  name: nginxkusc00612
  labels:
    app: nginxkusc00612
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd   # the task asks for disk=ssd, not disktype=ssd

kubectl apply -f nginxkusc00612.yaml
kubectl get po -o wide | grep nginxkusc00612   # note which node the pod landed on
kubectl get nodes --show-labels | grep disk    # confirm it is the node labelled disk=ssd; exam clusters usually have only 3 nodes, so this is easy to check
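
Instead of writing the YAML by hand, you can generate a skeleton and then add the nodeSelector block (assuming the exam's kubectl is recent enough for --dry-run=client; older releases spell it --dry-run):

kubectl run nginxkusc00612 --image=nginx --dry-run=client -o yaml > nginxkusc00612.yaml
# then edit the file and add the nodeSelector shown above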

### 7. Set configuration context $kubectl config use-context k8s. Create a deployment as follows: Name: nginxapp Using container nginx with version 1.11.9-alpine. The deployment should contain 3 replicas. Next, deploy the app with new version 1.12.0-alpine by performing a rolling update and record that update. Finally, rollback that update to the previous version 1.11.9-alpine.

  • Analysis: create a deployment, update its image (rolling update), then roll back to the previous version.
  • Search the official docs for Deployment.

kubectl config use-context k8s

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxapp
  labels:
    app: nginxapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.11.9-alpine
        ports:
        - containerPort: 80

kubectl apply -f nginxapp.yaml
kubectl set image deployment/nginxapp nginx=nginx:1.12.0-alpine --record=true   # with --record this counts as a recorded rolling update
kubectl rollout history deployment/nginxapp   # inspect the rollout history
kubectl rollout undo deployment/nginxapp      # done — rolled back to the previous version

If you cannot remember the commands, check kubectl set image --help and kubectl rollout --help.
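
Two quick checks worth running between the update and the rollback (plain kubectl, nothing exam-specific):

kubectl rollout status deployment/nginxapp            # wait for the rolling update to finish
kubectl describe deployment nginxapp | grep -i image  # confirm which image version is currently live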

### 8. Set configuration context $kubectl config use-context k8s. Create and configure the service front-end-service so it's accessible through NodePort/ClusterIP and routes to the existing pod named front-end.

  • Analysis: create a service whose backend is the existing pod front-end.
  • First check that the front-end pod exists:

kubectl get po | grep front-end

Check the pod's image with kubectl get po front-end -o yaml | grep image — it is nginx, which is useful for verification later. Also note the pod's labels, because the service selector must match them.
A single Service is all that is needed.

Reference: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/

kubectl config use-context k8s

apiVersion: v1
kind: Service
metadata:
  name: front-end-service
spec:
  selector:
    app: front-end   # this must match the labels on the front-end pod
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080

kubectl apply -f front-end-service.yaml

Check:
kubectl get nodes
ssh nodename
curl serviceIp:80
If the nginx welcome page comes back, everything is fine.
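
An imperative alternative that avoids selector mistakes, since kubectl expose copies the pod's own labels into the service selector (a sketch; inspect the result with kubectl get svc front-end-service -o yaml):

kubectl expose pod front-end --name=front-end-service --port=80 --target-port=80 --type=NodePort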

### 9. Set configuration context $kubectl config use-context k8s. Create a Pod as follows: Name: jenkins Using image: jenkins In a new Kubernetes namespace named pro-test

  • Analysis: create a jenkins pod in a new namespace.
kubectl config use-context k8s

Check whether the pro-test namespace exists — usually it does not:

kubectl get ns | grep pro-test
kubectl create ns pro-test

Search the official docs for Pod, then add the namespace field:

apiVersion: v1
kind: Pod
metadata:
  name: jenkins
  namespace: pro-test
  labels:
    env: jenkins
spec:
  containers:
  - name: jenkins
    image: jenkins

kubectl apply -f xx.yaml
kubectl get po -n pro-test | grep jenkins
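
On a recent kubectl the same pod can be created in one line (kubectl run creates a bare Pod on current releases; on very old versions it created a Deployment instead, so check kubectl run --help first):

kubectl run jenkins --image=jenkins -n pro-test
kubectl get po -n pro-test | grep jenkins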

### 10. Set configuration context $kubectl config use-context k8s. Create a deployment spec file that will: Launch 7 replicas of the redis image with the label: app_enb_stage=dev Deployment name: kual00612 Save a copy of this spec file to /opt/KUAL00612/deploy_spec.yaml (or .json) When you are done, clean up (delete) any new k8s API objects that you produced during this task.

  • Analysis: create a 7-replica redis deployment with the given label, save the YAML to the given path, then delete everything you created.
    Reference: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/

kubectl config use-context k8s

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kual00612
  labels:
    app_enb_stage: dev
spec:
  replicas: 7
  selector:
    matchLabels:
      app: kual00612   # using the deployment name for the selector label keeps things simple
  template:
    metadata:
      labels:
        app: kual00612
        app_enb_stage: dev   # the task asks for the replicas to carry this label
    spec:
      containers:
      - name: redis
        image: redis


kubectl apply -f xx.yaml
kubectl get po | grep kual00612   # all Running means it worked
kubectl delete -f xx.yaml         # the task requires deleting the objects afterwards
cat xx.yaml > /opt/KUAL00612/deploy_spec.yaml
cat /opt/KUAL00612/deploy_spec.yaml   # verify

### 11. Set configuration context $kubectl config use-context k8s. Create a file /opt/KUCC00612/kucc00612.txt that lists all pods that implement Service foo in Namespace production. The format of the file should be one pod name per line.

  • Analysis: find the pods matched by the foo service's selector and write their names, one per line, to the file.

kubectl config use-context k8s
kubectl get svc -n production                       # check that foo exists
kubectl get svc -n production -o yaml | grep selector

Be sure to read the selector labels, not the service's own labels — the selector is what picks the backend pods. Say the label turns out to be app=blog.

kubectl get po -n production -l app=blog | grep -v NAME | awk '{print $1}' >/opt/KUCC00612/kucc00612.txt
  • cat /opt/KUCC00612/kucc00612.txt   # confirm these are pod names
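
If you would rather not pipe through grep and awk, kubectl can emit bare names directly (a sketch using custom-columns):

kubectl get po -n production -l app=blog -o custom-columns=NAME:.metadata.name --no-headers > /opt/KUCC00612/kucc00612.txt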

### 12. Set configuration context $kubectl config use-context k8s. Create a Kubernetes Secret as follows: Name: super-secret credential: blob. Create a Pod named pod-secrets-via-file using the redis image which mounts a secret named super-secret at /secrets. Create a second Pod named pod-secretsvia-env using the redis image, which exports credential as

  • Analysis: create a secret, then consume it in one pod through a volume and in another through an environment variable.
  • Reference: https://kubernetes.io/zh/docs/concepts/configuration/secret/
kubectl config use-context k8s

Base64-encode blob before putting it into the secret:

echo -n 'blob' | base64

Create the YAML:

apiVersion: v1
kind: Secret
metadata:
  name: super-secret
type: Opaque
data:
  credential: YmxvYg==   # base64 of 'blob' — the output of the echo command above

kubectl apply -f secret.yaml

  • Create the pod pod-secrets-via-file:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
  - name: pod-secrets-via-file
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/secrets"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: super-secret

kubectl apply -f pod-secrets-via-file.yaml

  • Create the pod pod-secretsvia-env (note the secret's name is super-secret, not supersecret, and it only contains the credential key):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secretsvia-env
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: CREDENTIAL
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: credential
  restartPolicy: Never

kubectl apply -f pod-secretsvia-env.yaml

Verify:


kubectl exec -ti pod-secretsvia-env -- env   # prints many environment variables; check that the one you defined is there, or run echo $CREDENTIAL inside the pod to confirm
kubectl exec -ti pod-secrets-via-file -- ls /secrets
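
One more sanity check: read the credential back from the API and decode it (plain kubectl jsonpath):

kubectl get secret super-secret -o jsonpath='{.data.credential}' | base64 -d   # should print: blob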

### 13. Set configuration context $kubectl config use-context k8s. Create a pod as follows: Name: nonpersistent-redis Container image: redis Named-volume with name: cache-control Mount path: /data/redis It should launch in the pre-prod namespace and the volume MUST NOT be persistent.

  • Analysis: create a pod with a non-persistent (emptyDir) volume, in the pre-prod namespace.
  • Search the official docs for volume.

kubectl config use-context k8s
kubectl get ns | grep pre-prod   # the task requires the pre-prod namespace; create it if it does not exist

apiVersion: v1
kind: Pod
metadata:
  name: nonpersistent-redis
  namespace: pre-prod   # the task says the pod must launch in pre-prod
spec:
  containers:
  - image: redis
    name: redis
    volumeMounts:
    - mountPath: /data/redis
      name: cache-control
  volumes:
  - name: cache-control
    emptyDir: {}


kubectl apply -f xxx.yaml
kubectl get po -n pre-prod | grep nonpersistent   # confirm it is Running
kubectl exec -ti -n pre-prod nonpersistent-redis -- ls /data   # confirm the redis directory exists

### 14. Set configuration context $kubectl config use-context k8s. Scale the deployment webserver to 6 pods.

  • Analysis: scaling a deployment.

kubectl config use-context k8s
kubectl get deployment | grep webserver   # check that it exists
kubectl scale --help                      # look at the examples
kubectl scale deployment/webserver --replicas=6

### 15. Set configuration context $kubectl config use-context k8s. Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/nodenum.

  • Analysis: count the Ready nodes, excluding any node that carries a NoSchedule taint.

kubectl config use-context k8s
kubectl get nodes                      # count the Ready nodes: call it M
kubectl describe nodes | grep Taints   # count the lines containing NoSchedule: call it N
The answer is M - N; write that number to /opt/nodenum.
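
The arithmetic can also be scripted; a sketch (eyeball the intermediate outputs before trusting the number — a node that is both NotReady and tainted NoSchedule would throw the simple subtraction off):

READY=$(kubectl get nodes --no-headers | grep -cw Ready)
NOSCHED=$(kubectl describe nodes | grep Taints | grep -c NoSchedule)
echo $((READY - NOSCHED)) > /opt/nodenum
cat /opt/nodenum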

### 16. Set configuration context $kubectl config use-context k8s. From the Pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the Pod consuming most CPU to the file /opt/cpu.txt (which already exists).

  • Analysis: among the pods labelled name=cpu-utilizer, find the one using the most CPU and write its name to /opt/cpu.txt.

kubectl config use-context k8s
kubectl top --help                     # check how top is used
kubectl top pod -l name=cpu-utilizer   # several pods should match; find the one with the highest CPU column and write its name to /opt/cpu.txt
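
If the cluster's kubectl supports it, top can sort for you (check kubectl top pod --help for the flag; --sort-by=cpu lists the heaviest consumer first):

kubectl top pod -l name=cpu-utilizer --sort-by=cpu --no-headers | head -1 | awk '{print $1}' > /opt/cpu.txt
cat /opt/cpu.txt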

### 17. Set configuration context $kubectl config use-context k8s. Create a deployment as follows: Name: nginxdns Exposed via a service: nginx-dns Ensure that the service & pod are accessible via their respective DNS records The container(s) within any Pod(s) running as a part of this deployment should use the nginx image. Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to /opt/service.dns and /opt/pod.dns respectively. Ensure you use the busybox:1.28 image (or earlier) for any testing, as the latest release has an upstream bug which impacts the use of nslookup.

  • Analysis: create a service and a deployment, then resolve the service's DNS record and a pod's DNS record, saving the lookups to the given files.

  • Reference: https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/

kubectl config use-context k8s
  • vi deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxdns
spec:
  selector:
    matchLabels:
      app: nginxdns
  replicas: 1
  template:
    metadata:
      labels:
        app: nginxdns
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: http
          containerPort: 80
  • vi service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-dns   # the task names the service nginx-dns
spec:
  selector:
    app: nginxdns
  ports:
  - protocol: TCP
    port: 80
    targetPort: http

https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

  • vi busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test
  labels:
    app: busybox-test
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']   # the sleep 3600 is essential — without it busybox exits immediately

kubectl apply -f busybox.yaml
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
kubectl exec -ti busybox-test -- nslookup nginx-dns > /opt/service.dns
kubectl get po -o wide | grep nginxdns   # note the pod IP, e.g. 10.244.1.7
kubectl exec -ti busybox-test -- nslookup 10-244-1-7.default.pod.cluster.local > /opt/pod.dns   # a pod's DNS record is its IP with dashes instead of dots; substitute the IP you saw above

### 18. No configuration context change required for this item. Create a snapshot of the etcd instance running at https://127.0.0.1:2379 saving the snapshot to the file path /data/backup/etcd-snapshot.db The etcd instance is running etcd version 3.2.18 The following TLS certificates/key are supplied for connecting to the server with etcdctl CA certificate: /opt/KUCM00612/ca.crt Client certificate: /opt/KUCM00612/etcd-client.crt Client key: /opt/KUCM00612/etcd-client.key

  • Analysis: back up etcd. The command is in the docs; since the certificates must be passed, check etcdctl --help for the flags.

  • Reference: https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster

  • Certificates are required, so check etcdctl --help for the TLS-related flags.

export ETCDCTL_API=3
etcdctl snapshot save --help
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert="/opt/KUCM00612/ca.crt" \
  --cert="/opt/KUCM00612/etcd-client.crt" \
  --key="/opt/KUCM00612/etcd-client.key" \
  snapshot save /data/backup/etcd-snapshot.db

  • Then verify the snapshot you just wrote:

ETCDCTL_API=3 etcdctl --write-out=table snapshot status /data/backup/etcd-snapshot.db

Output like the following means it worked:
+----------+----------+------------+------------+
|   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| fe01cf57 |       10 |          7 | 2.1 MB     |
+----------+----------+------------+------------+

### 19. Set configuration context $kubectl config use-context ek8s. Set the node labelled with name=ek8s-node-1 as unavailable and reschedule all the pods running on it.

  • Analysis: mark the node labelled name=ek8s-node-1 unschedulable and move its pods to other nodes — in other words, kubectl drain.


kubectl config use-context ek8s   # switch cluster
kubectl get node --show-labels | grep name=ek8s-node-1
kubectl get pods -o wide | grep ek8s-node-1
kubectl cordon ek8s-node-1   # mark the node unschedulable first so no new pods land on it while you drain; arguably optional on the exam, since nobody else touches the cluster while you work
kubectl drain ek8s-node-1 --ignore-daemonsets
kubectl get nodes   # ek8s-node-1 now carries the SchedulingDisabled mark

### 20. Set configuration context $kubectl config use-context wk8s. A Kubernetes worker node, labelled with name=wk8s-node-0, is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent. Hints: You can ssh to the failed node using $ssh wk8s-node-0. You can assume elevated privileges on the node with the following command $sudo -i

  • Analysis: wk8s-node-0 is NotReady; fix it so it becomes Ready, and make the fix permanent.


kubectl config use-context wk8s
kubectl get nodes   # wk8s-node-0 is NotReady
ssh wk8s-node-0
sudo -i
systemctl status kubelet   # it turns out kubelet is not running
systemctl start kubelet
systemctl enable kubelet   # enabling it makes the fix survive a reboot
exit   # back from root to the normal user
exit   # back from wk8s-node-0 to the master
kubectl get nodes   # wk8s-node-0 is Ready again

### 21. Set configuration context $kubectl config use-context wk8s. Configure the kubelet system managed service, on the node labelled with name=wk8s-node-1, to launch a Pod containing a single container of image nginx named myservice automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node. Hints: You can ssh to the failed node using $ssh wk8s-node-1. You can assume elevated privileges on the node with the following command $sudo -i

  • Analysis: this tests static Pods.
  • Reference: https://kubernetes.io/docs/tasks/configure-pod-container/static-pod/

ssh wk8s-node-1
cd /etc/kubernetes/manifests
  • vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: myservice
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP

In principle kubelet polls /etc/kubernetes/manifests and creates a static pod from any YAML it finds there. Check kubelet:
systemctl status kubelet
  • The output looks like this:

● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sat 2020-04-18 20:44:33 CST; 1 months 30 days ago
     Docs: https://kubernetes.io/docs/
 Main PID: 21185 (kubelet)
    Tasks: 23
   Memory: 106.9M
   CGroup: /system.slice/kubelet.service
           └─21185 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kuber...

Jun 18 07:04:16 master1 kubelet[21185]: W0618 07:04:16.392108 21185 watcher.go:87] Error while processing event ...ectory
Jun 18 07:04:46 master1 kubelet[21185]: W0618 07:04:46.511713 21185 watcher.go:87] Error while processing event ...ectory
(further watcher.go:87 warnings of the same form omitted)
Hint: Some lines were ellipsized, use -l to show in full.
  • Check the contents of the kubelet service configuration file:

cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  • None of the variables here passes a --pod-manifest-path argument (the classic form would look like KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubernetes/manifests"). So look at the file referenced by KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml and check whether it sets a static-pod path. If neither place sets it, kubelet will not create static pods — pod-manifest-path has no default value.

cat /var/lib/kubelet/config.yaml

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

There is no static-pod path parameter, so append at the end:

staticPodPath: /etc/kubernetes/manifests

Then run:

systemctl restart kubelet
systemctl enable kubelet

  • Then check from the master node — the static pod is up, and the task is done.

### 22. You are given two nodes, master1 and node1, plus an admin.conf file; deploy a cluster on these two nodes.

  • Analysis: install kubeadm, kubelet and kubectl, initialise the control plane with the provided config, then install a pod network add-on.
  • Docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#%E5%AE%89%E8%A3%85-kubeadm-kubelet-%E5%92%8C-kubectl

sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

  • Docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node

    kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all   # both the flag to ignore errors and the config file are provided; do not modify the config file — read the question carefully!
  • Docs: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
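
The task gives you two nodes, so node1 still has to join the cluster. A sketch — the token and hash are placeholders that come from the kubeadm init output (or can be regenerated on master1):

kubeadm token create --print-join-command   # run on master1 if the original join command was lost
# then run the printed command on node1; it looks like:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>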
  • Check that kubectl get nodes shows the nodes Ready and that all pods in the kube-system namespace are Running.

### 23. Set configuration context $kubectl config use-context bk8s. Given a partially-functioning Kubernetes cluster, identify symptoms of failure on the cluster. Determine the node, the failing service, and take actions to bring up the failed service and restore the health of the cluster. Ensure that any changes are made permanently. The worker node in this cluster is labelled with name=bk8s-node-0 Hints: You can ssh to the relevant nodes using $ssh $(NODE) where $(NODE) is one of bk8s-master-0 or bk8s-node-0. You can assume elevated privileges on any node in the cluster with the following command: $ sudo -i.

  • Analysis: part of the cluster is broken and you need to repair it, permanently.

kubectl config use-context bk8s
ssh bk8s-master-0
sudo -i
kubectl get po                            # hangs with no response for a long time — the apiserver is probably down
systemctl status kube-apiserver.service   # no such unit
ps -ef | grep apiserver                   # not running as a process either

  • So the apiserver is not deployed as a local service; it must be deployed as a static pod.

kubectl get po -A | grep apiserver   # nothing here either — so the problem is that the apiserver never started
# Static pod manifests are normally kept in /etc/kubernetes/manifests; if not, check the kubelet configuration
ls /etc/kubernetes/manifests   # the apiserver and controller-manager YAMLs are all here

Inspect the kubelet configuration as in question 21: the static-pod path is not configured. Fix it the same way, i.e. add to the kubelet config file:

staticPodPath: /etc/kubernetes/manifests

Note that this can be configured in either of two places: in the kubelet.service arguments, e.g. KUBELET_ARGS=--cluster-dns=10.254.0.10 --cluster-domain=kube.local --pod-manifest-path=/etc/kubernetes/manifests, or in the file referenced by KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml.

systemctl restart kubelet
systemctl enable kubelet

The apiserver pod comes up and kubectl get po returns normally again; kubectl get nodes shows everything Ready.

### 24. Set configuration context $kubectl config use-context hk8s. Create a persistent volume with name appconfig of capacity 1Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-config

  • Analysis: search the official docs for persistent volume.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: appconfig
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-config

kubectl apply -f xx.yaml
kubectl get pv   # the PV is fine if its STATUS shows Available
