HPA
HPA stands for Horizontal Pod Autoscaling. It dynamically scales the number of Pod replicas up or down based on current resource usage (such as CPU and memory) to relieve pressure on individual Pods. When Pod load crosses a configured threshold, the scaling policy creates additional Pods to share the load; when the Pods have been stably idle for a while, the replica count is automatically reduced again.
Prerequisite: the system must be able to obtain the current resource usage of Pods (that is, `kubectl top pod` runs and returns data).
heapster: this component used to ship with Kubernetes clusters, but it was removed after version 1.12. To keep this functionality you should deploy metrics-server, the aggregator of cluster resource usage data.
要是想實(shí)現(xiàn)自動(dòng)擴(kuò)容縮容的功能,還需要部署heapster服務(wù),而這個(gè)服務(wù)集成在Prometheus的MetricServer服務(wù)中,也就是說(shuō)需要部署Prometheus服務(wù),但是我們也可以直接部署heapster服務(wù)
實(shí)現(xiàn)Pod的擴(kuò)容與縮容示例
因?yàn)閔eapster集成在MetricServer服務(wù)中,所以首先部署這個(gè)服務(wù)
1. Install metrics-server by cloning the project from GitHub:
```
[root@master ~]# git clone https://github.com/kubernetes-incubator/metrics-server.git
```

2. Edit the YAML file:

```
[root@master ~]# vim metrics-server/deploy/kubernetes/metrics-server-deployment.yaml
        image: k8s.gcr.io/metrics-server-amd64:v0.3.1    # pin this image version
        # add at line 44:
        command:
        - /metrics-server
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
```

3. Download the image k8s.gcr.io/metrics-server-amd64:v0.3.1 (k8s.gcr.io is unreachable from mainland China, so use the workaround below).
The pull-google-container helper script:
```
[root@master ~]# vim pull-google.sh
image=$1
echo $1
img=`echo $image | sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/_/-/g;s/anjia0532\./anjia0532\//g' | uniq | awk '{print ""$1""}'`
echo "docker pull $img"
docker pull $img
echo "docker tag $img $image"
docker tag $img $image

[root@master ~]# chmod +x pull-google.sh && cp pull-google.sh /usr/local/bin/pull-google-container
[root@master ~]# pull-google-container k8s.gcr.io/metrics-server-amd64:v0.3.1
```

4. Package the image and send it to each Kubernetes node:
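To see what the `sed` pipeline in the script actually does, the rewrite can be run on its own: it maps a `k8s.gcr.io/...` image name onto the mirrored repository on Docker Hub.

```shell
# Reproduce the script's image-name rewrite for the metrics-server image.
image="k8s.gcr.io/metrics-server-amd64:v0.3.1"
img=$(echo "$image" | sed 's/k8s\.gcr\.io/anjia0532\/google-containers/g;s/gcr\.io/anjia0532/g;s/\//\./g;s/ /\n/g;s/_/-/g;s/anjia0532\./anjia0532\//g')
echo "$img"   # anjia0532/google-containers.metrics-server-amd64:v0.3.1
```

The last substitution restores the one `/` that separates the mirror account from the image path; every other `/` and `_` is folded into the flattened Docker Hub repository name.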
```
[root@master ~]# docker save > metrics-server-amd64.tar k8s.gcr.io/metrics-server-amd64:v0.3.1
[root@master ~]# scp metrics-server-amd64.tar node01:/root
[root@master ~]# scp metrics-server-amd64.tar node02:/root
[root@node01 ~]# docker load < metrics-server-amd64.tar
[root@node02 ~]# docker load < metrics-server-amd64.tar
```

5. Apply the YAML files:
```
[root@master ~]# kubectl apply -f metrics-server/deploy/kubernetes/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
[root@master ~]# kubectl get pod -n kube-system
metrics-server-849dcc6bb4-hr5xp   1/1     Running   0          13s
```

6. Verify:
```
[root@master ~]# kubectl top node
error: metrics not available yet        # just wait a moment and retry
[root@master ~]# kubectl top pod -n kube-system metrics-server-849dcc6bb4-hr5xp
NAME                              CPU(cores)   MEMORY(bytes)
metrics-server-849dcc6bb4-hr5xp   1m           13Mi
[root@master ~]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   56m          2%     1145Mi          66%
node01   12m          0%     478Mi           27%
node02   11m          0%     452Mi           26%
```

Next we use a test image built on php-apache. It contains code that runs CPU-intensive computations, which lets us simulate load.
```
[root@master ~]# kubectl run php-apache --image=mirrorgooglecontainers/hpa-example:latest --requests=cpu=200m --expose --port=80
[root@master ~]# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
php-apache   1/1     1            1           33s
[root@master ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-794cdd478f-l9kxn   1/1     Running   0          6m27s
[root@master ~]# kubectl top pod php-apache-794cdd478f-l9kxn
NAME                          CPU(cores)   MEMORY(bytes)
php-apache-794cdd478f-l9kxn   0m           9Mi
```

Create the HPA controller:
```
[root@master ~]# kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
horizontalpodautoscaler.autoscaling/php-apache autoscaled
[root@master ~]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          2m1s
```

This caps average CPU utilization at 50%, with at least 1 and at most 10 Pods.
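The `kubectl autoscale` one-liner is shorthand for creating an HPA object; the equivalent declarative manifest looks roughly like this (autoscaling/v1 schema):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:               # the workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1                # --min
  maxReplicas: 10               # --max
  targetCPUUtilizationPercentage: 50   # --cpu-percent
```

Keeping the manifest in version control instead of relying on the imperative command makes the scaling policy reproducible.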
Monitor the Pods in real time:

```
[root@master ~]# kubectl get pod -w
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-794cdd478f-l9kxn   1/1     Running   0          40m
```

Create a Pod that continuously requests the php-apache Service we just created:
```
[root@master ~]# kubectl run -i --tty load-generator --image=busybox /bin/sh
```

Inside the Pod, run this command to simulate requests against the php-apache Service:
#對(duì)Pod進(jìn)行死循環(huán)請(qǐng)求 / # while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done運(yùn)行一段時(shí)間后查看pod的數(shù)量變化
```
NAME                             READY   STATUS              RESTARTS   AGE
load-generator-7d549cd44-xm98c   1/1     Running             1          25m
php-apache-867f97c8cb-4r6sk      1/1     Running             0          19m
php-apache-867f97c8cb-4rcpk      1/1     Running             0          13m
php-apache-867f97c8cb-5pbxf      1/1     Running             0          16m
php-apache-867f97c8cb-8htth      1/1     Running             0          13m
php-apache-867f97c8cb-d94h9      0/1     ContainerCreating   0          13m
php-apache-867f97c8cb-drh52      1/1     Running             0          18m
php-apache-867f97c8cb-f67bs      0/1     ContainerCreating   0          17m
php-apache-867f97c8cb-nxc2r      1/1     Running             0          19m
php-apache-867f97c8cb-vw74k      1/1     Running             0          39m
php-apache-867f97c8cb-wb6l5      0/1     ContainerCreating   0          15m
```

When the request loop is stopped, the Pod count does not drop immediately; the HPA waits out a cool-down period before scaling in, to guard against the traffic spiking again.
With that, automatic scaling of the Pod replica count is working.
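The replica count the controller converges on follows the documented HPA rule desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A quick sketch of that arithmetic in shell (the utilization figures are made up for illustration):

```shell
# One Pod observed at 250% of its CPU request, target 50%:
current_replicas=1
current_cpu=250   # observed average utilization, percent of request
target_cpu=50     # the --cpu-percent target
desired=$(awk -v r="$current_replicas" -v c="$current_cpu" -v t="$target_cpu" \
    'BEGIN { d = r * c / t; printf "%d\n", (d == int(d)) ? d : int(d) + 1 }')
echo "$desired"   # 5: five replicas bring average utilization back to 50%
```

The ceiling means the HPA always rounds up, so it slightly over-provisions rather than leaving the target exceeded.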
```
[root@master ~]# kubectl get hpa -w
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   106%/50%   1         10        8          50m
php-apache   Deployment/php-apache   102%/50%   1         10        8          50m
php-apache   Deployment/php-apache   93%/50%    1         10        8          51m
php-apache   Deployment/php-apache   87%/50%    1         10        8          51m
php-apache   Deployment/php-apache   82%/50%    1         10        8          51m
php-apache   Deployment/php-apache   77%/50%    1         10        8          51m
php-apache   Deployment/php-apache   68%/50%    1         10        8          52m
php-apache   Deployment/php-apache   61%/50%    1         10        8          52m
```

Resource limits
The snippets below are fragments of YAML files, not complete manifests.
基于Pod
kubernetes對(duì)資源的限制實(shí)際上是通過(guò)cgroup來(lái)控制的,cgroup是容器的一組用來(lái)控制內(nèi)核如何運(yùn)行進(jìn)程的相關(guān)屬性集合,針對(duì)內(nèi)存、cpu和各種設(shè)備都有對(duì)應(yīng)的cgroup
默認(rèn)情況下,Pod運(yùn)行沒(méi)有cpu和內(nèi)存的限額,這意味著系統(tǒng)中的任何Pod將能夠想執(zhí)行該P(yáng)od所在的節(jié)點(diǎn)一樣,消耗足夠多的cpu和內(nèi)存,一般會(huì)針對(duì)某些應(yīng)用的pod資源進(jìn)行資源限制,這個(gè)資源限制通過(guò)resources的requeste和limits來(lái)實(shí)現(xiàn)
```
[root@master ~]# vim cgroup-pod.yaml
spec:
  containers:
  - name: xxx
    imagePullPolicy: Always
    image: xxx
    ports:
    - protocol: TCP
      containerPort: 80
    resources:
      limits:
        cpu: "4"
        memory: 2Gi
      requests:
        cpu: 260m
        memory: 260Mi
```

`requests` is the amount of resources to allocate, and `limits` is the maximum the container may use; think of them roughly as an initial value and a cap.
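On the node, the kubelet translates these values into cgroup settings. A sketch of the arithmetic, assuming cgroup v1 conventions (shares are relative weights scaled to 1024 per core; the CFS quota is µs of CPU time per 100000 µs period):

```shell
# requests.cpu: 260m  ->  cpu.shares       = 260  * 1024   / 1000
# limits.cpu:   4     ->  cpu.cfs_quota_us = 4000 * 100000 / 1000
request_m=260
limit_m=4000
echo "cpu.shares=$(( request_m * 1024 / 1000 ))"          # cpu.shares=266
echo "cpu.cfs_quota_us=$(( limit_m * 100000 / 1000 ))"    # cpu.cfs_quota_us=400000
```

So the request only influences scheduling weight under contention, while the limit is a hard throttle: the container gets at most 400 ms of CPU time per 100 ms period (4 cores).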
基于名稱空間
1)計(jì)算資源配額
```
[root@master ~]# vim compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: "20"
    requests.cpu: "20"
    requests.memory: 100Gi
    limits.cpu: "40"
    limits.memory: 200Gi
```

2) Object count quota:
```
[root@master ~]# vim object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services.loadbalancers: "2"
```

3) CPU and memory LimitRange:
```
[root@master ~]# vim limitRange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 50Gi
      cpu: 5
    defaultRequest:
      memory: 1Gi
      cpu: 1
    type: Container
```

`default` supplies the limit value for containers that do not set one;
`defaultRequest` supplies the corresponding request value.
Summary