CKA Exam Notes
Before the exam:
1. 16 questions in total; the exam is out of 100 with 66 to pass, and you get two attempts.
Preparation:
1. A passport, or another ID document that carries your name in English.
2. Pick a weekday morning or evening slot; avoid weekends, when the environment can lag badly enough to affect your result.
3. Show up an hour early for check-in; shutting down VMs, WebEx, Teams and similar services alone took 30 minutes.
Questions:
1. RBAC (4%)
Task:
Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount in a specific namespace.
Create a ClusterRole named deployment-clusterrole that only allows creating the following resource types:
Deployment
StatefulSet
DaemonSet
In the existing namespace app-team1, create a new ServiceAccount named cicd-token.
Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Answer:
Create the ClusterRole:
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
Create the ServiceAccount:
kubectl -n app-team1 create serviceaccount cicd-token
Create the RoleBinding (if the task gives no namespace, create a ClusterRoleBinding; if a namespace is given, create a RoleBinding):
kubectl -n app-team1 create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
Verify:
$ kubectl auth can-i create deployments -n app-team1 --as=system:serviceaccount:app-team1:cicd-token
yes
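The same RBAC objects can also be written declaratively; a minimal sketch equivalent to the imperative commands (the binding name cicd-token-binding is a free choice):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets"]
    verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-token-binding
  namespace: app-team1        # binding is namespaced, so the grant is limited to app-team1
subjects:
  - kind: ServiceAccount
    name: cicd-token
    namespace: app-team1
roleRef:
  kind: ClusterRole
  name: deployment-clusterrole
  apiGroup: rbac.authorization.k8s.io
```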
2. Drain a node
Task:
Mark the node named k8s-node-t as unschedulable and reschedule all pods running on it onto other nodes.
Answer:
kubectl drain k8s-node-t --delete-emptydir-data --ignore-daemonsets --force
3. Cluster upgrade
Task:
An existing Kubernetes cluster is running version 1.18.8. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.19.0.
Also upgrade kubelet and kubectl on the master node.
Be sure to drain the master node before the upgrade and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container runtime, the CNI plugin, the DNS service, or any other component.
Answer:
1. Log in to the master node as root.
2. source <(kubectl completion bash)   // not needed in this exam environment
3. kubectl drain master-1 --ignore-daemonsets   // this exam environment does not need --delete-emptydir-data
4. apt-cache show kubeadm | grep 1.19.0
5. apt-get install kubeadm=1.19.0-00
6. kubeadm version
7. kubeadm upgrade apply v1.19.0 --etcd-upgrade=false, then answer y
8. apt-get install kubelet=1.19.0-00 kubectl=1.19.0-00 && systemctl daemon-reload && systemctl restart kubelet && kubelet --version
9. kubectl get nodes shows the master is unschedulable, so: kubectl uncordon master-1
4. etcd backup and restore
Task:
First, create a snapshot of the etcd instance running at https://127.0.0.1:2379 and save it to the given path /var/lib/backup/etcd-snapshot.db.
Then restore from /var/www/etcd/xxxx.db.
The following TLS certificate and key are provided to connect to the server with etcdctl:
CA certificate; client certificate; client key.
Answer:   // in this exam environment, run on node1 — no context switch needed
1. Select the etcdctl v3 API: export ETCDCTL_API=3
2. Backup: etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/opt/Kuxxxx/ca.crt" --cert="/opt/Kuxxxx/server.crt" --key="/opt/Kuxxxx/server.key" snapshot save /var/lib/backup/etcd-snapshot.db
Check: etcdctl --endpoints=https://127.0.0.1:2379 --cacert="/opt/Kuxxxx/ca.crt" --cert="/opt/Kuxxxx/server.crt" --key="/opt/Kuxxxx/server.key" snapshot status /var/lib/backup/etcd-snapshot.db -w table
3. Restore: etcdctl snapshot restore /var/www/etcd/xxxx.db --cacert="/opt/Kuxxxx/ca.crt" --cert="/opt/Kuxxxx/server.crt" --key="/opt/Kuxxxx/server.key"
Check: etcdctl snapshot status /var/lib/backup/etcd-snapshot.db -w table
5. NetworkPolicy
Task:
In the existing namespace forbar, create a new NetworkPolicy named allow-port-from-namespace that allows the namespace corp-bar to access port 9200.
When the namespaces are different:
// the namespace corp-bar needs the label project: corp-bar
kubectl label namespace corp-bar project=corp-bar
vim 5.yml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: forbar
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              project: corp-bar
      ports:
        - protocol: TCP
          port: 9200
When it is the same namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: forbar
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 9200
kubectl apply -f 5.yml
6. SVC
Task:
Reconfigure the existing deployment front-end and add a port specification named http that exposes port 80/TCP of the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to expose the individual pods through a NodePort on the nodes they are scheduled to.
Answer:
1. kubectl get deployment front-end
2. kubectl edit deployment front-end
Under name: nginx add:
          ports:
            - containerPort: 80
              name: http
3. Create the svc:
kubectl create service nodeport front-end-svc --tcp=80:80
Fix the service's selector so it matches the deployment's pod label:
Check the labels: kubectl get svc front-end-svc --show-labels
// this must be changed or traffic will not flow; to verify, log in to the master node and curl the svc IP, which should return the nginx page
kubectl edit svc front-end-svc
Change the selector app: front-end-svc to app: front-end
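Equivalently, the service can be written out as a manifest instead of being created and edited; a minimal sketch, assuming the deployment's pods carry the label app: front-end:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort          # expose via a port on each scheduling node
  selector:
    app: front-end        # must match the deployment's pod label
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http    # refers to the named container port added above
```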
7. Ingress
Create a new nginx Ingress resource that follows these rules:
Name: pong
Namespace: ing-internal
Exposing service hello on path /hello using port 5678
Verify: curl -kL <INTERNAL_IP>/hello
// the existing service hello has to be changed from NodePort to ClusterIP
kubectl edit service hello and change type: NodePort to type: ClusterIP
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
8. Scale
Scale the deployment web-server to 4 replicas.
kubectl get deployments.apps
kubectl scale deployment web-server --replicas=4
9. Schedule a pod via node labels
Schedule a pod as follows:
Name: nginx.kusc004001
Image: nginx
Node selector: disk=ssd
Answer:
1. kubectl get nodes --show-labels
2. kubectl run nginx.kusc004001 --image=nginx --dry-run=client -o yaml > 9.yaml
3. Edit the yaml file and add under spec:
nodeSelector:
  disk: ssd
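Put together, the finished manifest looks roughly like this (pod name and label taken from the task above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx.kusc004001
spec:
  nodeSelector:       # sibling of containers, directly under spec
    disk: ssd
  containers:
    - name: nginx
      image: nginx
```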
10. Count ready nodes
Check how many nodes are Ready (excluding nodes that are marked unschedulable) and write the count to /opt/1.txt.
kubectl get nodes | grep -w Ready    // count the Ready nodes
kubectl describe node | grep Taint   // exclude any of them tainted NoSchedule
In this environment 2 nodes showed Taints: <none>, so:
echo 2 > /opt/1.txt
11. Pod with multiple containers
Create a pod named kucc3 using the following images (the task may list anywhere from 1 to 4 images):
nginx + redis + memcached + consul
Answer:
kubectl run kucc3 --image=nginx --dry-run=client -o yaml > 11.yaml
Then add the other three images as additional containers.
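After editing, the manifest looks roughly like this (container names are a free choice; one container per image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kucc3
spec:
  containers:
    - name: nginx
      image: nginx
    - name: redis
      image: redis
    - name: memcached
      image: memcached
    - name: consul
      image: consul
```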
12. PV
Create a PV named app.config, 1Gi in size, with access mode ReadOnlyMany, using a hostPath volume mounted at the local path priveage-config.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app.config
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  hostPath:
    path: "/mnt/data"
13. PVC
Create a PVC that meets the following requirements:
name: pv-volume
class: cni-hostpath-vc
capacity: 10Mi
Create a pod and mount the PVC:
name: test
image: nginx
mount path: /usr/share/nginx/
Set the pod volume's access mode to ReadWriteOnce.
Finally, use kubectl edit or kubectl patch to expand the PVC to 70Mi.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: cni-hostpath-vc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi

apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  volumes:
    - name: pv-volume
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: pv-volume
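For the final resize step the task allows kubectl edit or kubectl patch; a sketch of a merge patch that could be passed to kubectl patch pvc pv-volume -p '<patch>', assuming the storage class allows volume expansion:

```yaml
# merge patch raising the request to 70Mi
# (only works if the class sets allowVolumeExpansion: true)
spec:
  resources:
    requests:
      storage: 70Mi
```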
14. Extract logs
Monitor the pod's logs and write the lines containing the error field to /var/log/1.txt.
kubectl run bar --image=nginx
kubectl logs bar | grep 'the keyword from the task' > /var/log/1.txt
15. sidecar
Add a busybox sidecar to the pod big-corp-app. The new sidecar container uses the following command:
/bin/sh -c 'tail -n+1 -f /var/log/app.log'
Mount a logs volume and make sure the file /var/log/app.log is reachable in the sidecar.
Do not modify the existing container.
Do not modify the log path or file.
Answer:
1. kubectl get pod big-corp-app -o yaml > 15.yaml
2. Edit 15.yaml:
Under the existing container's volumeMounts: add
    - name: logs
      mountPath: /var/log
After the existing container, add the sidecar:
  - name: busybox
    image: busybox:1.28
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
Under volumes: add
  - name: logs
    emptyDir: {}
3. Since containers cannot be added to a running pod, replace it: kubectl delete pod big-corp-app && kubectl apply -f 15.yaml
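Pieced together, the relevant parts of the edited manifest look roughly like this (the existing container is shown as a stand-in named app with a placeholder image — keep whatever the pod already runs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
    - name: app                  # the pod's existing container
      image: busybox             # placeholder for the original image
      volumeMounts:
        - name: logs
          mountPath: /var/log    # so the app writes /var/log/app.log into the shared volume
    - name: busybox              # the new sidecar
      image: busybox:1.28
      args: [/bin/sh, -c, 'tail -n+1 -F /var/log/app.log']
      volumeMounts:
        - name: logs
          mountPath: /var/log    # same volume, so app.log is reachable here
  volumes:
    - name: logs
      emptyDir: {}
```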
16. top
From the pods labeled name=cpu-loader, find the pod with the highest CPU load and write its name to the file log.
Answer:
kubectl top pod -A -l name=cpu-loader --sort-by=cpu
echo 'name of the pod using the most CPU' > log
17. kubelet
The worker node named k8s-node-C is NotReady; find and fix the problem.
Answer:
ssh to the k8s-node-C node, then:
systemctl start kubelet
systemctl enable kubelet
Summary