
prometheus-adapter custom HPA

Published: 2024/3/12

prometheus-adapter supports HPA on custom metrics.

  • Generate the prometheus-adapter certificate
  • Create a secret from the generated certificate
    • Inspect the generated secret
  • Mount the secret in the prometheus-adapter deployment
  • Structure of the prometheus-adapter configuration file
  • Rule based on istio request rate
  • Rule based on istio response time
  • How the rules configured in prometheus-adapter interact with kube-controller-manager
    • HPA based on a Pods metric
    • HPA based on cpu or memory
  • When and by whom <<.LabelMatchers>> and <<.GroupBy>> in the prometheus-adapter config are filled in
  • References
    • hpa demo
    • Problems encountered
    • References

Generate the prometheus-adapter certificate

export PURPOSE=adapter
openssl req -x509 -sha256 -new -nodes -days 365 -newkey rsa:2048 -keyout ${PURPOSE}-ca.key -out ${PURPOSE}-ca.crt -subj "/CN=ca"

Create a secret from the generated certificate

kubectl create secret generic prometheus-adapter-serving-certs --from-file=adapter-ca.crt --from-file=adapter-ca.key -n monitoring

Inspect the generated secret

[root@m1 katy]# kc get secret prometheus-adapter-serving-certs -n monitoring -o yaml
apiVersion: v1
data:
  adapter-ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...  # base64-encoded certificate, truncated here
  adapter-ca.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0t...  # base64-encoded key, truncated here
kind: Secret
metadata:
  creationTimestamp: "2021-09-17T10:07:43Z"
  name: prometheus-adapter-serving-certs
  namespace: monitoring
  resourceVersion: "10403385"
  selfLink: /api/v1/namespaces/monitoring/secrets/prometheus-adapter-serving-certs
  uid: 2c8d9e35-1bde-4acf-8297-5ad84d788cd4
type: Opaque

Mount the secret in the prometheus-adapter deployment

Add the following flags to the container args:

- --tls-cert-file=/var/run/adapter-cert/adapter-ca.crt
- --tls-private-key-file=/var/run/adapter-cert/adapter-ca.key
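The flags above assume the secret is mounted at /var/run/adapter-cert. A minimal sketch of the corresponding volume wiring in the deployment spec (the volume name is illustrative; the mount path must match the --tls-* flags):

```yaml
# Hypothetical volume wiring for the prometheus-adapter container.
volumeMounts:
- name: adapter-cert
  mountPath: /var/run/adapter-cert
  readOnly: true
volumes:
- name: adapter-cert
  secret:
    secretName: prometheus-adapter-serving-certs
```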

Structure of the prometheus-adapter configuration file

The prometheus-adapter configuration file has three parts:
rules for the cpu and memory resource metrics, under the resourceRules section;
rules for custom metrics, under the rules section;
rules for external metrics, under the externalRules section (not used in this article).

resourceRules covers only cpu and memory by default; other resource types cannot be added.
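As a rough sketch of how the three sections sit side by side in the adapter config (the '...' bodies are placeholders, not working queries):

```yaml
# Skeleton of the prometheus-adapter config; rule bodies are placeholders.
rules:            # custom metrics, served via custom.metrics.k8s.io
- seriesQuery: '...'
externalRules:    # external metrics, served via external.metrics.k8s.io
- seriesQuery: '...'
resourceRules:    # resource metrics (metrics.k8s.io); cpu and memory only
  cpu:
    containerQuery: '...'
  memory:
    containerQuery: '...'
```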

Rule based on istio request rate

Under the rules section of the prometheus-adapter config, configure the request rate over a 5-minute window,
and rename the metric to http_requests_5m via name.as:

- seriesQuery: 'istio_requests_total{namespace!="",pod!=""}'
  resources:
    overrides:
      namespace:
        resource: "namespace"
      pod:
        resource: "pod"
  name:
    matches: "^(.*)_total"
    as: "http_requests_5m"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)'

Rule based on istio response time

Under the rules section of the prometheus-adapter config, configure the response time over a 5-minute window,
and rename the metric to http_requests_restime_5m via name.as:

- seriesQuery: '{__name__=~"istio_request_duration_milliseconds_.*",namespace!="",pod!="",reporter="destination"}'
  seriesFilters:
  - isNot: .*bucket
  resources:
    overrides:
      namespace:
        resource: namespace
      pod:
        resource: pod
  name:
    matches: ^(.*)
    as: "http_requests_restime_5m"
  metricsQuery: 'sum(rate(istio_request_duration_milliseconds_sum{<<.LabelMatchers>>}[5m]) > 0) by (<<.GroupBy>>) / sum(rate(istio_request_duration_milliseconds_count{<<.LabelMatchers>>}[5m]) > 0) by (<<.GroupBy>>)'

After editing the configuration file, restart the prometheus-adapter pod for the changes to take effect.
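Once the pod is back up, one way to check that the new metric is exposed is to query the aggregated custom metrics API directly (the namespace default here is illustrative; this requires access to a live cluster):

```shell
# List per-pod http_requests_5m values via the custom metrics API
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests_5m"
```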

How the rules configured in prometheus-adapter interact with kube-controller-manager

First distinguish resource metrics (resourceRules) from custom metrics (rules).
When creating the HPA object you specify the metric type: type Resource means scaling on resource metrics (cpu, memory), while type Pods means scaling on a per-pod custom metric.

HPA based on a Pods metric


kube-controller-manager uses the metric.name in the HPA object to look up whether a rule with that name exists. Normally this name is one configured in the prometheus-adapter config file, such as the http_requests_5m rule defined above.
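A sketch of such an HPA object using the autoscaling/v2 API (the Deployment name sample-app and the target value are illustrative, not from the original setup):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_5m   # must match the name.as in the adapter rule
      target:
        type: AverageValue
        averageValue: "50"
```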

HPA based on cpu or memory


When the HPA is based on cpu or memory, kube-controller-manager queries by resource type rather than by metric name.
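The equivalent Resource-type metrics block in an autoscaling/v2 HPA spec might look like this (the 80% utilization target is illustrative):

```yaml
metrics:
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 80
```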

When and by whom <<.LabelMatchers>> and <<.GroupBy>> in the prometheus-adapter config are filled in

When kube-controller-manager periodically queries current resource usage (every 15s by default, set by --horizontal-pod-autoscaler-sync-period), it uses the HPA object's namespace and spec.scaleTargetRef to locate the object being autoscaled, and that object's spec.selector.matchLabels are what gets filled into <<.LabelMatchers>>.

<<.GroupBy>> is not filled in for now, but this does not affect the end result.
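To make the substitution concrete, here is a small Python sketch of how the metricsQuery template could be rendered once the label matchers are known. This is a hypothetical re-implementation for illustration only (the render function and its inputs are invented; the real prometheus-adapter does this in Go with text/template):

```python
# Hypothetical sketch of metricsQuery template substitution.
def render(template, series, label_matchers, group_by):
    """Fill the <<.Series>>, <<.LabelMatchers>> and <<.GroupBy>> placeholders."""
    return (template
            .replace('<<.Series>>', series)
            .replace('<<.LabelMatchers>>', ','.join(label_matchers))
            .replace('<<.GroupBy>>', ','.join(group_by)))

template = 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[5m])) by (<<.GroupBy>>)'
# Label matchers come from the target object's spec.selector.matchLabels.
query = render(template, 'istio_requests_total',
               ['namespace="default"', 'pod=~"sample-app-.*"'], ['pod'])
print(query)
# -> sum(rate(istio_requests_total{namespace="default",pod=~"sample-app-.*"}[5m])) by (pod)
```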

References

https://itnext.io/horizontal-pod-autoscale-with-custom-metrics-8cb13e9d475

hpa demo

[root@m1 prometheus-adapter]# kc get hpa
NAME         REFERENCE               TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
sample-app   Deployment/sample-app   66m/500m   1         10        1          4m4s
[root@m1 prometheus-adapter]# kd hpa
Name:                sample-app
Namespace:           cloudtogo-system
Labels:              app=sample-app
Annotations:         kubectl.kubernetes.io/last-applied-configuration:
                       {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"labels":{"app":"sample-app"},"name":"sa...
CreationTimestamp:   Thu, 05 Aug 2021 07:47:42 +0000
Reference:           Deployment/sample-app
Metrics:             ( current / target )
  "http_requests_per_second1" on pods:  66m / 50m
Min replicas:        1
Max replicas:        10
Deployment pods:     10 current / 10 desired
Conditions:
  Type            Status  Reason            Message
  ----            ------  ------            -------
  AbleToScale     True    ReadyForNewScale  recommended size matches current size
  ScalingActive   True    ValidMetricFound  the HPA was able to successfully calculate a replica count from pods metric http_requests_per_second1
  ScalingLimited  True    TooManyReplicas   the desired replica count is more than the maximum replica count
Events:
  Type     Reason                        Age                From                       Message
  ----     ------                        ----               ----                       -------
  Warning  FailedGetPodsMetric           11m (x9 over 13m)  horizontal-pod-autoscaler  unable to get metric http_requests: unable to fetch metrics from custom metrics API: the server could not find the metric http_requests for pods
  Warning  FailedComputeMetricsReplicas  11m (x9 over 13m)  horizontal-pod-autoscaler  invalid metrics (1 invalid out of 1), first error is: failed to get pods metric value: unable to get metric http_requests: unable to fetch metrics from custom metrics API: the server could not find the metric http_requests for pods
  Normal   SuccessfulRescale             9m27s              horizontal-pod-autoscaler  New size: 2; reason: pods metric http_requests_per_second1 above target
  Normal   SuccessfulRescale             8m27s              horizontal-pod-autoscaler  New size: 3; reason: pods metric http_requests_per_second1 above target
  Normal   SuccessfulRescale             7m27s              horizontal-pod-autoscaler  New size: 4; reason: pods metric http_requests_per_second1 above target
  Normal   SuccessfulRescale             6m57s              horizontal-pod-autoscaler  New size: 5; reason: pods metric http_requests_per_second1 above target
  Normal   SuccessfulRescale             6m26s              horizontal-pod-autoscaler  New size: 6; reason: pods metric http_requests_per_second1 above target
  Normal   SuccessfulRescale             5m55s              horizontal-pod-autoscaler  New size: 8; reason: pods metric http_requests_per_second1 above target
  Normal   SuccessfulRescale             5m25s              horizontal-pod-autoscaler  New size: 10; reason: pods metric http_requests_per_second1 above target

Problems encountered

After deploying prometheus-adapter, kc top kept failing:

[root@m1 main]# kc top po -n monitoring
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)


The cause was missing configuration in custom-metrics-apiservice.yaml (the v1beta1.metrics.k8s.io APIService was absent). The complete custom-metrics-apiservice.yaml is as follows:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: prometheus-adapter
    namespace: monitoring
  version: v1beta1
  versionPriority: 100
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta2.custom.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta2
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 200
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  service:
    name: prometheus-adapter
    namespace: monitoring
  group: external.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

With the complete custom-metrics-apiservice.yaml applied, the kc top command works normally again.

References

https://github.com/stefanprodan/k8s-prom-hpa

https://blog.csdn.net/weixin_38320674/article/details/105460033
https://www.cnblogs.com/yuhaohao/p/14109787.html
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
