
HPA Controller

Published: 2024/3/12

Introduction to HPA

HPA (Horizontal Pod Autoscaler) dynamically adjusts the number of Pod replicas according to Pod load: during business peaks it automatically scales out replicas to absorb requests, and during off-peak periods it automatically scales them back in to save resources.

Its counterpart is VPA (Vertical Pod Autoscaler), which adjusts the maximum resource limits of an individual Pod based on its resource utilization; it cannot be used together with HPA.

HPA belongs to the autoscaling API group, which currently has two main versions, v1 and v2:

Version          Description
autoscaling/v1   Supports scaling based on the CPU metric only
autoscaling/v2   Supports scaling based on Resource Metrics (resource usage such as Pod CPU and memory), Custom Metrics, and External Metrics
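As a sketch of the v2 API shape (the HPA name and target Deployment name here are made-up examples, not from this post), the same CPU target is expressed through a metrics list, which is what lets v2 mix resource, custom, and external metrics in one object:

```yaml
# Hypothetical autoscaling/v2 HPA; "nginx-deploy" is an example target.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cpu-hpa-v2-demo
spec:
  minReplicas: 3
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization          # average utilization across all Pods
        averageUtilization: 80
```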

Deploying Metrics Server

The HPA needs Metrics Server to obtain Pod resource utilization, so Metrics Server must be deployed first.

Metrics Server is the aggregator of core monitoring data for a Kubernetes cluster. It collects resource metrics from each kubelet, aggregates them, and exposes them in the Kubernetes apiserver through the Metrics API, where they are consumed by the Horizontal Pod Autoscaler and Vertical Pod Autoscaler. The same data can be viewed with kubectl top node/pod.

Preparing the image

nerdctl pull bitnami/metrics-server:0.6.1
nerdctl tag bitnami/metrics-server:0.6.1 harbor-server.linux.io/kubernetes/metrics-server:0.6.1
nerdctl push harbor-server.linux.io/kubernetes/metrics-server:0.6.1

The deployment manifest is as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        image: harbor-server.linux.io/kubernetes/metrics-server:0.6.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

After creating the resources, check the Pod status:
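Assuming kubectl is configured against the cluster, the status check can be done with the standard label selector used in the manifest above:

```shell
kubectl get pods -n kube-system -l k8s-app=metrics-server
```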

Verifying that metrics-server works

If node and pod resource metrics can be retrieved, metrics-server is working properly.
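For example, assuming a working kubectl context, the following standard commands read metrics through the Metrics API (the raw query is optional and only illustrates the API path registered by the APIService above):

```shell
kubectl top node        # per-node CPU/memory usage
kubectl top pod -A      # per-pod usage across all namespaces
# Query the Metrics API directly through the apiserver:
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```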

HPA Configuration Parameters

The HPA controller has several important parameters that control scaling behavior. All of them can be set as startup flags of kube-controller-manager:

  • --horizontal-pod-autoscaler-sync-period: interval at which Pod resource utilization is queried; default 15s
  • --horizontal-pod-autoscaler-downscale-stabilization: minimum interval between two scale-in operations; default 5m
  • --horizontal-pod-autoscaler-cpu-initialization-period: initialization grace period after Pod start during which CPU samples are ignored; default 5m
  • --horizontal-pod-autoscaler-initial-readiness-delay: period after Pod start during which the Pod is considered not yet ready and its metrics are not collected; default 30s
  • --horizontal-pod-autoscaler-tolerance: tolerance for the ratio of the current metric to the target (a float, default 0.1); scaling triggers only when the ratio leaves the 0.9–1.1 band. For example, with a CPU target of 50%: at 80% utilization, 80/50 = 1.6 > 1.1, so the HPA scales out; at 40%, 40/50 = 0.8 < 0.9, so it scales in.
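These thresholds feed the replica-count formula the HPA controller uses, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal shell sketch of that arithmetic, using made-up example values (not taken from a live cluster):

```shell
# Example values, not from a real cluster:
current_replicas=3
current_cpu=80   # observed average CPU utilization, percent
target_cpu=50    # targetCPUUtilizationPercentage
# Ratio 80/50 = 1.6 is outside the 0.9-1.1 tolerance band, so the HPA acts.
# Ceiling division via integer arithmetic: ceil(3 * 80 / 50) = 5
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"   # prints: desired replicas: 5
```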

HPA Examples

The examples below use HPA v1 to automatically scale Pods in and out based on the CPU metric.

Scale-in example:

First deploy a 5-replica nginx Deployment, then let the HPA scale it in:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchExpressions:
    - {key: "app", operator: In, values: ["nginx"]}
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - name: http
          containerPort: 80
        resources:   # Pods must set resource requests, otherwise the HPA cannot compute their utilization
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1
            memory: 1Gi

The HPA manifest below sets the Pod CPU utilization target to 80%, with a minimum of 3 replicas and a maximum of 10:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pod-autoscaler-demo
spec:
  minReplicas: 3    # minimum number of replicas
  maxReplicas: 10   # maximum number of replicas
  scaleTargetRef:   # the resource the HPA watches
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deploy
  targetCPUUtilizationPercentage: 80   # CPU utilization target

After creating it, check the HPA resource:
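Assuming kubectl access to the cluster, the HPA state and its scaling events can be inspected with:

```shell
kubectl get hpa pod-autoscaler-demo        # TARGETS shows current/target CPU, e.g. 0%/80%
kubectl describe hpa pod-autoscaler-demo   # events record each scaling decision
```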

Because the nginx Pods created earlier receive little traffic, their CPU utilization stays well below 80%, so after a while a scale-in is triggered.

Because the minimum replica count defined in the HPA is 3, it scales in no further than 3 Pods.

Scale-out example:

Deploy 3 Pods from the stress-ng image to test scale-out (stress-ng is a stress-testing tool):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress-ng-deploy
  labels:
    app: stress-ng
spec:
  replicas: 3
  selector:
    matchExpressions:
    - {key: "app", operator: In, values: ["stress-ng"]}
  template:
    metadata:
      labels:
        app: stress-ng
    spec:
      containers:
      - name: stress-ng
        image: lorel/docker-stress-ng
        args: ["--vm", "2", "--vm-bytes", "512M"]
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1
            memory: 1Gi

The HPA manifest is as follows:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: pod-autoscaler-demo1
spec:
  minReplicas: 3
  maxReplicas: 10
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stress-ng-deploy
  targetCPUUtilizationPercentage: 80

Check the HPA resource:
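Assuming kubectl access to the cluster, the scale-out can be observed live:

```shell
kubectl get hpa pod-autoscaler-demo1 -w   # watch REPLICAS climb toward maxReplicas
kubectl get pods -l app=stress-ng         # confirm the new stress-ng Pods
```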

stress-ng drives Pod CPU utilization to its limit, so after a while the HPA gradually raises the replica count. Because the maximum replica count defined in the HPA is 10, scaling stops once 10 Pods are running.
