Pod resource limits, probes, and node assignment
Table of contents
- Resource limits
- Restart policy: what a Pod does after a failure
- A script to test the restart policy
- Health checks, also known as probes (Probe)
- Creating a Pod on a specified node
Resource limits

Resource requests and limits for a Pod and its containers:

```
spec.containers[].resources.limits.cpu        # CPU upper bound
spec.containers[].resources.limits.memory     # memory upper bound
spec.containers[].resources.requests.cpu      # baseline CPU allocated at creation
spec.containers[].resources.requests.memory   # baseline memory allocated at creation
```

Let's create a Pod with resource limits:
```
[root@localhost ~]# vim pod2.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```

Deploy the resource:
```
[root@localhost ~]# kubectl apply -f pod2.yaml
```

View the Pod's events:
```
[root@localhost demo]# kubectl describe pod frontend
Events:
  Type    Reason     Age    From                     Message
  ----    ------     ----   ----                     -------
  Normal  Scheduled  11m    default-scheduler        Successfully assigned default/frontend to 192.168.136.30
  Normal  Pulling    11m    kubelet, 192.168.136.30  pulling image "mysql"
  Normal  Pulled     10m    kubelet, 192.168.136.30  Successfully pulled image "mysql"
  Normal  Created    10m    kubelet, 192.168.136.30  Created container
  Normal  Started    10m    kubelet, 192.168.136.30  Started container
  Normal  Pulling    10m    kubelet, 192.168.136.30  pulling image "wordpress"
  Normal  Pulled     7m30s  kubelet, 192.168.136.30  Successfully pulled image "wordpress"
  Normal  Created    7m29s  kubelet, 192.168.136.30  Created container
  Normal  Started    7m27s  kubelet, 192.168.136.30  Started container
```

View the node's resource usage:
```
[root@localhost ~]# kubectl describe node 192.168.136.30
  Namespace    Name                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------    ----                                   ------------  ----------  ---------------  -------------
  default      frontend                               500m (50%)    1 (100%)    128Mi (14%)      256Mi (29%)
  default      my-tomcat-7cd4fdbb5b-bb4qx             0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default      nginx-7697996758-w4z29                 0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system  kubernetes-dashboard-7dffbccd68-6bqvn  50m (5%)      100m (10%)  100Mi (11%)      300Mi
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests     Limits
  --------  --------     ------
  cpu       550m (55%)   1100m (110%)
  memory    228Mi (26%)  556Mi (63%)
Events: <none>
```

Note that the frontend row sums its two containers: requests are 250m × 2 = 500m CPU and 64Mi × 2 = 128Mi memory; limits are 500m × 2 = 1 CPU and 128Mi × 2 = 256Mi memory.

Check the Pod's status:
```
[root@localhost ~]# kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
frontend   2/2     Running   0          30m
```

List the namespaces:
```
[root@localhost ~]# kubectl get ns
NAME          STATUS   AGE
default       Active   5d2h
kube-public   Active   5d2h
kube-system   Active   5d2h
```

Restart policy: what a Pod does after a failure
1. Always: restart the container whenever it terminates; this is the default policy.
2. OnFailure: restart the container only when it exits abnormally (non-zero exit status).
3. Never: never restart the container after it terminates.

(Note: Kubernetes never restarts a Pod object itself; a Pod can only be deleted and recreated.)
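As a minimal sketch of where the policy lives in a manifest (the Pod name, image, and command here are illustrative, not from the examples above), `restartPolicy` sits at the Pod `spec` level, alongside `containers`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: onfailure-demo          # hypothetical name, for illustration only
spec:
  restartPolicy: OnFailure      # restart only on a non-zero exit status
  containers:
  - name: task
    image: busybox
    args: ["/bin/sh", "-c", "exit 0"]   # exits 0, so with OnFailure it is left in Completed
```

Because the policy applies to the whole Pod, every container in the Pod shares the same restart behavior.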
View the current policy:

```
[root@localhost demo]# kubectl edit deploy
...
      restartPolicy: Always
...
```

A script to test the restart policy
```
[root@localhost k8s]# vim pod3.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 8; exit 3
```

- sleep: pause for the given number of seconds
- exit: terminate with the given status code
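The effect of those two commands can be checked locally, without a cluster (the sleep is shortened here); the non-zero exit status is exactly what Always and OnFailure react to:

```shell
# Reproduce the container's command locally, with a shorter sleep.
sh -c 'sleep 1; exit 3'
status=$?
echo "exit status: $status"   # prints "exit status: 3" - non-zero, so the container would be restarted
```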
Deploy the resource:

```
[root@localhost demo]# kubectl apply -f pod3.yaml
```

Watch the status (the Pod keeps restarting):
```
[root@localhost k8s]# kubectl get pods -w
foo   1/1   Running            0   37s
foo   0/1   Error              0   45s
foo   1/1   Running            1   58s
foo   0/1   Error              1   66s
foo   0/1   CrashLoopBackOff   1   77s
foo   1/1   Running            2   80s
foo   0/1   Error              2   88s
```

A Pod that reaches the Completed state is not restarted:
```
[root@localhost demo]# vim pod3.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 10; exit 3
  restartPolicy: Never
```

Start the service (once the container finishes, the Pod stays in its final state and is not restarted):
```
[root@localhost ~]# kubectl apply -f pod3.yaml
```

Health checks, also known as probes (Probe)
(Note: both kinds of probes can be defined on the same container.)
- livenessProbe: if the check fails, the container is killed and then handled according to the Pod's restartPolicy.
- readinessProbe: if the check fails, Kubernetes removes the Pod from the Service's endpoints.
Probes support three check methods:
- httpGet: send an HTTP request; a status code of at least 200 and below 400 counts as success.
- exec: run a command inside the container; an exit status of 0 counts as success.
- tcpSocket: attempt to open a TCP socket to the container; success if the connection is established.
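The manifest below demonstrates the exec method; for comparison, here is a sketch of the other two methods on a hypothetical nginx container (the port, path, and timing values are illustrative):

```yaml
spec:
  containers:
  - name: web
    image: nginx:1.15
    livenessProbe:
      httpGet:              # HTTP check: status 200-399 counts as success
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    readinessProbe:
      tcpSocket:            # TCP check: succeeds if the connection is established
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
```

A common pattern is exactly this pairing: a liveness probe to restart a wedged container, and a readiness probe to keep traffic away until the container can serve it.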
Create a Pod with a liveness probe:

```
[root@localhost ~]# vim pod4.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 30
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
```

Start the service:
```
[root@localhost ~]# kubectl apply -f pod4.yaml
```

Check the Pod (after /tmp/healthy is removed the probe fails, so the container is restarted over and over):

```
[root@localhost ~]# kubectl get pods
NAME            READY   STATUS             RESTARTS   AGE
liveness-exec   0/1     CrashLoopBackOff   6          18m
```

Creating a Pod on a specified node
```
[root@localhost ~]# vim pod5.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeName: 192.168.136.40
  containers:
  - name: nginx
    image: nginx:1.15
```

```
[root@localhost ~]# kubectl create -f pod5.yaml
```

With nodeName set, the Pod is bound to that node directly, without going through the scheduler:
```
[root@localhost ~]# kubectl describe pod pod-example
Events:
  Type    Reason   Age    From                     Message
  ----    ------   ----   ----                     -------
  Normal  Pulling  3m42s  kubelet, 192.168.136.40  pulling image "nginx:1.15"
  Normal  Pulled   3m13s  kubelet, 192.168.136.40  Successfully pulled image "nginx:1.15"
  Normal  Created  3m12s  kubelet, 192.168.136.40  Created container
  Normal  Started  3m12s  kubelet, 192.168.136.40  Started container
```

There is no Scheduled event from default-scheduler. Next, label the nodes:
```
[root@localhost ~]# kubectl label nodes 192.168.136.40 kgc=a
[root@localhost ~]# kubectl label nodes 192.168.136.30 kgc=b
[root@localhost ~]# kubectl get nodes --show-labels
NAME             STATUS   ROLES    AGE    VERSION   LABELS
192.168.136.30   Ready    <none>   5d6h   v1.12.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kgc=b,kubernetes.io/hostname=192.168.136.30
192.168.136.40   Ready    <none>   5d6h   v1.12.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kgc=a,kubernetes.io/hostname=192.168.136.40
```

Create a Pod that selects a node by label:
```
[root@localhost ~]# vim pod6.yaml
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
  labels:
    app: nginx
spec:
  nodeSelector:
    kgc: b
  containers:
  - name: nginx
    image: nginx:1.15
```

```
[root@localhost ~]# kubectl apply -f pod6.yaml
```

Check which node and address it got:
```
[root@localhost ~]# kubectl get pods -o wide
pod-example   1/1   Running   0   62s   172.17.27.4   192.168.136.30   <none>
```

Check that it went through the scheduler this time:
```
[root@localhost ~]# kubectl describe pod pod-example
Events:
  Type    Reason     Age    From                     Message
  ----    ------     ----   ----                     -------
  Normal  Scheduled  2m33s  default-scheduler        Successfully assigned default/pod-example to 192.168.136.30
  Normal  Pulling    2m26s  kubelet, 192.168.136.30  pulling image "nginx:1.15"
  Normal  Pulled     112s   kubelet, 192.168.136.30  Successfully pulled image "nginx:1.15"
  Normal  Created    112s   kubelet, 192.168.136.30  Created container
  Normal  Started    111s   kubelet, 192.168.136.30  Started container
```

The scheduling process
- Node predicate selection: every node is checked against a set of predicate rules, and nodes that fail are filtered out.
- Node priority ranking: the remaining nodes are ranked so the node best suited to run the Pod can be chosen.
- Node selection: the highest-ranked node is chosen to run the Pod; if several nodes tie for first place, one of them is picked at random.
…mechanism: once Etcd has successfully stored the Pod's information, it immediately notifies the APIServer.
5. The APIServer immediately tells the Scheduler that a Pod has been created; when the Scheduler sees that the Pod's Dest Node attribute is empty (Dest Node=""), it triggers the scheduling flow.
6. During scheduling, this Pod object goes through three stages (node predicate selection, node priority ranking, and node selection), from which the best node is selected.
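The three stages can be sketched as a filter-then-rank pipeline; the node names and scores below are made up purely for illustration:

```shell
# Toy model of the scheduler: column 1 = node name, column 2 = a fitness score.
printf 'node1 2\nnode2 0\nnode3 5\n' |
  awk '$2 > 0'  |   # predicate stage: drop nodes that fail the check
  sort -k2 -nr  |   # priority stage: rank the surviving nodes by score
  head -n 1         # selection stage: take the best node -> prints "node3 5"
```

The real scheduler evaluates many predicates and priority functions per node, but the overall shape is the same: filter, rank, pick.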
Summary

The above is everything 生活随笔 has collected on Pod resource limits, probes, and assigning Pods to nodes; hopefully it helps you solve the problems you've run into.