K8s Cluster Master High Availability in Practice
Building on the earlier posts, this article covers high availability (HA) for a k8s cluster. In general, k8s cluster HA involves the following:
1. etcd cluster HA
2. Cluster DNS service HA
3. HA for the master components: kube-apiserver, kube-controller-manager, and kube-scheduler
Of these, etcd is the easiest to set up; for details see the earlier post:
http://blog.51cto.com/ylw6006/2095871
For the cluster DNS service, HA can be achieved by setting the DNS pod replica count to 2 and using labels to make the two replicas run on different nodes.
For kube-apiserver there are several viable approaches; for an overview see:
https://jishu.io/kubernetes/kubernetes-master-ha/
kube-controller-manager and kube-scheduler are comparatively easy to make highly available: simply run multiple instances (with leader election).
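To illustrate the DNS replica placement mentioned above, here is a hedged sketch of the relevant fragment of a DNS Deployment spec (the name `kube-dns` and label `k8s-app: kube-dns` are assumptions; adjust them to whatever DNS add-on your cluster runs). A required pod anti-affinity on the hostname topology forces the two replicas onto different nodes:

```yaml
# Sketch only: fragment of an assumed kube-dns Deployment.
# replicas: 2 plus required pod anti-affinity keeps the two
# DNS pods on different nodes, so losing one node does not
# take down cluster DNS.
spec:
  replicas: 2
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: kube-dns
            topologyKey: kubernetes.io/hostname
```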
一、環(huán)境介紹
master節(jié)點1: 192.168.115.5/24 主機名:vm1
master節(jié)點2: 192.168.115.6/24 主機名:vm2
VIP地址: 192.168.115.4/24 (使用keepalived實現(xiàn))
Node節(jié)點1: 192.168.115.6/24 主機名:vm2
Node節(jié)點2: 192.168.115.7/24 主機名:vm3
操作系統(tǒng)版本:centos 7.2 64bit
K8s版本:1.9.6 二進制部署
本文演示環(huán)境是在前文的基礎上,已有k8s集群(1個master節(jié)點、2個node節(jié)點上),實現(xiàn)k8s集群master組件的高可用,關于k8s環(huán)境的部署請參考前文鏈接!
1、配置Etcd集群和TLS認證 ——>?http://blog.51cto.com/ylw6006/2095871
2、Flannel網(wǎng)絡組件部署 ——>?http://blog.51cto.com/ylw6006/2097303
3、升級Docker服務 ——>?http://blog.51cto.com/ylw6006/2103064
4、K8S二進制部署Master節(jié)點 ——>?http://blog.51cto.com/ylw6006/2104031
5、K8S二進制部署Node節(jié)點 ——>?http://blog.51cto.com/ylw6006/2104692
II. Certificate Update
在vm1節(jié)點上完成證書的更新,重點是要把master相關ip全部全部加入到列表里面
# mkdir api-ha && cd api-ha # cat k8s-csr.json {"CN": "kubernetes","hosts": ["127.0.0.1","192.168.115.4","192.168.115.5","192.168.115.6","10.254.0.1","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "FuZhou","L": "FuZhou","O": "k8s","OU": "System"}] }# cfssl gencert -ca=/etc/ssl/etcd/ca.pem \-ca-key=/etc/ssl/etcd/ca-key.pem \-config=/etc/ssl/etcd/ca-config.json \-profile=kubernetes k8s-csr.json | cfssljson -bare kubernetes# mv *.pem /etc/kubernetes/ssl/三、配置master組件
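Before distributing the components, it is worth sanity-checking that a certificate really carries the VIP in its SAN list. Below is a minimal, self-contained sketch: it generates a throwaway self-signed certificate with the same SAN list purely for illustration (it assumes OpenSSL 1.1.1+ for `-addext`); against the real cluster you would inspect `/etc/kubernetes/ssl/kubernetes.pem` instead.

```shell
# Generate a throwaway cert carrying the SAN list from the CSR above
# (illustration only -- the real cert is produced by cfssl).
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" \
  -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.115.4,IP:192.168.115.5,IP:192.168.115.6,IP:10.254.0.1,DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local" \
  2>/dev/null
# Dump the SAN extension; the VIP must be present, otherwise clients
# talking to https://192.168.115.4:6443 will fail TLS verification.
sans=$(openssl x509 -in "$workdir/cert.pem" -noout -text | grep -A1 'Subject Alternative Name')
echo "$sans"
rm -rf "$workdir"
```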
1. Copy the kube-apiserver, kube-controller-manager, and kube-scheduler binaries from vm1 to vm2:

```shell
# cd /usr/local/sbin
# scp -rp kube-apiserver kube-controller-manager kube-scheduler vm2:/usr/local/sbin/
```

2. Copy the certificate files from vm1 to vm2:

```shell
# cd /etc/kubernetes/ssl
# scp -rp ./* vm2:/etc/kubernetes/ssl
```

3. Configure and start the services:
```shell
# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/sbin/kube-apiserver \
  --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --advertise-address=0.0.0.0 \
  --bind-address=0.0.0.0 \
  --insecure-bind-address=127.0.0.1 \
  --authorization-mode=RBAC \
  --runtime-config=rbac.authorization.k8s.io/v1alpha1 \
  --kubelet-https=true \
  --enable-bootstrap-token-auth=true \
  --token-auth-file=/etc/kubernetes/token.csv \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=1024-65535 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --client-ca-file=/etc/ssl/etcd/ca.pem \
  --service-account-key-file=/etc/ssl/etcd/ca-key.pem \
  --etcd-cafile=/etc/ssl/etcd/ca.pem \
  --etcd-certfile=/etc/ssl/etcd/server.pem \
  --etcd-keyfile=/etc/ssl/etcd/server-key.pem \
  --etcd-servers=https://192.168.115.5:2379,https://192.168.115.6:2379,https://192.168.115.7:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/lib/audit.log \
  --event-ttl=1h \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-scheduler \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/sbin/kube-controller-manager \
  --address=127.0.0.1 \
  --master=http://127.0.0.1:8080 \
  --allocate-node-cidrs=true \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-cidr=172.30.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/ssl/etcd/ca.pem \
  --cluster-signing-key-file=/etc/ssl/etcd/ca-key.pem \
  --service-account-private-key-file=/etc/ssl/etcd/ca-key.pem \
  --root-ca-file=/etc/ssl/etcd/ca.pem \
  --leader-elect=true \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

# systemctl enable kube-apiserver
# systemctl enable kube-controller-manager
# systemctl enable kube-scheduler
# systemctl start kube-apiserver
# systemctl start kube-controller-manager
# systemctl start kube-scheduler
```

Note:
In the kube-apiserver unit file on vm1, the --advertise-address and --bind-address parameters must be changed so the apiserver listens on all interfaces (0.0.0.0), as shown above.
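That change can also be scripted. A sketch follows; it operates on a temporary copy of the relevant ExecStart lines so it is runnable anywhere, whereas on the real host you would edit `/usr/lib/systemd/system/kube-apiserver.service` in place and then reload systemd:

```shell
# Work on a temporary copy of the relevant ExecStart lines so the
# sketch is self-contained; paths and addresses mirror the setup above.
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStart=/usr/local/sbin/kube-apiserver \
  --advertise-address=192.168.115.5 \
  --bind-address=192.168.115.5 \
  --insecure-bind-address=127.0.0.1 \
EOF
# Rewrite both listen flags to 0.0.0.0 (all interfaces); the
# insecure-bind-address line is left untouched.
sed -i \
  -e 's|--advertise-address=.*|--advertise-address=0.0.0.0 \\|' \
  -e 's|--bind-address=.*|--bind-address=0.0.0.0 \\|' \
  "$unit"
grep -- '-address=' "$unit"
# On the real host, follow up with:
#   systemctl daemon-reload && systemctl restart kube-apiserver
```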
IV. Installing and Configuring keepalived
```shell
# yum -y install keepalived
# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        ylw@fjhb.cn
    }
    notification_email_from admin@fjhb.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 60
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s.59iedu.com
    }
    virtual_ipaddress {
        192.168.115.4/24
    }
    track_script {
        check_apiserver
    }
}

# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target kube-apiserver.service
Requires=kube-apiserver.service

[Service]
Type=forking
PIDFile=/var/run/keepalived.pid
KillMode=process
EnvironmentFile=-/etc/sysconfig/keepalived
ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target
```

Note:
vm2節(jié)點上需要修改state為BACKUP, priority為99 (priority值必須小于master節(jié)點配置值)
```shell
# cat /etc/keepalived/check_apiserver.sh
#!/bin/bash
flag=$(systemctl status kube-apiserver &> /dev/null; echo $?)
if [[ $flag != 0 ]]; then
    echo "kube-apiserver is down, stopping keepalived"
    systemctl stop keepalived
fi

# chmod +x /etc/keepalived/check_apiserver.sh
# systemctl daemon-reload
# systemctl enable keepalived
# systemctl start keepalived
```

V. Updating the Client Configuration
1. Point the server entries in kubelet.kubeconfig, bootstrap.kubeconfig, and kube-proxy.kubeconfig at the VIP:
```shell
# grep 'server' /etc/kubernetes/kubelet.kubeconfig
    server: https://192.168.115.4:6443
# grep 'server' /etc/kubernetes/bootstrap.kubeconfig
    server: https://192.168.115.4:6443
# grep 'server' /etc/kubernetes/kube-proxy.kubeconfig
    server: https://192.168.115.4:6443
```

2. kubectl config:

```shell
# grep 'server' /root/.kube/config
    server: https://192.168.115.4:6443
```

3. Restart the client services:

```shell
# systemctl restart kubelet
# systemctl restart kube-proxy
```

VI. Testing
1. Cluster state before stopping any service: the VIP sits on vm1.
2. Stop the kube-apiserver service on vm1: the VIP is released from vm1, yet clients can still connect to the master and retrieve pod information.
The log shows the VIP being removed automatically.
3. On vm2 you can see the VIP being registered automatically, and kubectl clients connect normally.
4. Start kube-apiserver and keepalived again on vm1; since this is a preempting master/backup setup, vm1 reclaims the VIP.
5. On vm2 you can see the VIP being released and keepalived returning to the BACKUP state.
6. Throughout the whole process, another client can keep connecting to the master VIP to verify service continuity.
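Step 6 can be automated with a small polling loop; here is a sketch (the VIP URL in the comment comes from this article's environment — substitute your own) that repeatedly hits an endpoint and counts failed requests during failover:

```shell
# Poll a URL repeatedly and report how many requests failed.
# Usage: poll_endpoint <url> <attempts>
poll_endpoint() {
  url=$1; attempts=$2; fails=0; i=0
  while [ "$i" -lt "$attempts" ]; do
    # -k because the apiserver cert is signed by the private cluster CA
    curl -k -s -o /dev/null --max-time 2 "$url" || fails=$((fails + 1))
    i=$((i + 1))
  done
  echo "$fails"
}
# During the failover test, run something like:
#   poll_endpoint https://192.168.115.4:6443/healthz 60
```

A failure count of zero (or near zero, allowing for the VRRP advert interval) across the failover indicates the VIP handover was transparent to clients.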
VII. Improving the Setup with haproxy
With keepalived alone, only one apiserver handles traffic at a time, which becomes a performance bottleneck as apiserver load grows. Adding haproxy provides both master HA and load balancing of the traffic across the apiservers.
1. Install and configure haproxy, with the same configuration on both masters.
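The haproxy configuration itself is not reproduced in this article's text, so here is a hedged sketch of what a TCP-mode frontend/backend for the two apiservers might look like, listening on 8443 (the port the client configs below use); the section names and timeout values are assumptions:

```
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend k8s-api
    bind 0.0.0.0:8443
    default_backend k8s-masters

backend k8s-masters
    balance roundrobin
    server vm1 192.168.115.5:6443 check
    server vm2 192.168.115.6:6443 check
```

TCP mode is used rather than HTTP mode so that haproxy passes the TLS traffic through untouched and the apiserver certificate is presented directly to clients.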
2. Update the kube-apiserver configuration; adjust the IP address to your environment:
```shell
# grep 'address' /usr/lib/systemd/system/kube-apiserver.service
  --advertise-address=192.168.115.5 \
  --bind-address=192.168.115.5 \
  --insecure-bind-address=127.0.0.1 \
```

3. Update the keepalived unit file and configuration; adjust the IP address in the vrrp script to your environment:
```shell
# cat /usr/lib/systemd/system/keepalived.service
[Unit]
Description=LVS and VRRP High Availability Monitor
After=syslog.target network-online.target
Requires=haproxy.service
######## remaining output omitted ########

# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        ylw@fjhb.cn
    }
    notification_email_from admin@fjhb.cn
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER
}
vrrp_script check_apiserver {
    script "curl -o /dev/null -s -w %{http_code} -k https://192.168.115.5:6443"
    interval 3
    timeout 3
    fall 2
    rise 2
}
######## remaining output omitted ########
```

4. Point the kubelet and kubectl client configuration files at haproxy's port 8443:
```shell
# grep '192' /etc/kubernetes/bootstrap.kubeconfig
    server: https://192.168.115.4:8443
# grep '192' /etc/kubernetes/kubelet.kubeconfig
    server: https://192.168.115.4:8443
# grep '192' /etc/kubernetes/kube-proxy.kubeconfig
    server: https://192.168.115.4:8443
# grep '192' /root/.kube/config
    server: https://192.168.115.4:8443
```

5. Restart the services and verify:
On the master nodes, restart the reconfigured services (kube-apiserver, keepalived, haproxy); on the kubelet side:

```shell
# systemctl restart kubelet
# systemctl restart kube-proxy
```

Summary

With keepalived holding the VIP and haproxy spreading requests across both apiservers, the k8s master components no longer constitute a single point of failure.