

Deploying a highly available Kubernetes cluster with kubeadm

Published: 2023/12/20

Contents

    • Installing k8s with kubeadm
        • 1. Cluster types
        • 2. Installation methods
        • 3. High-availability architecture diagram
      • I. Preparing the environment (host machine with 16 GB+ RAM)
        • 1. Software and system requirements
      • 2. Node planning
      • II. Installing k8s with kubeadm
        • 1. Kernel tuning script (all machines)
        • 2. Passwordless SSH and time sync (all machines)
        • 3. Installing IPVS and kernel tuning (all machines)
        • 4. Installing Docker (all machines)
          • 1. Docker install script
          • 2. Uninstalling Docker
        • 5. Installing Kubernetes (all machines)
        • 6. Making kubeadm highly available
          • 1. Install the HA software (all master nodes)
          • 2. Configure keepalived (all master nodes)
          • 3. Configure haproxy (all master nodes)
        • 7. Initializing the m01 master node
          • 1. List the images Kubernetes needs
          • 2. Deploy the m01 master node
          • 3. Troubleshooting
        • 8. The Calico network plugin
          • 1. Install the cluster network plugin (master node)
          • 2. Apply the Calico manifest
          • 3. Check cluster status
      • III. Installing the cluster dashboard (Dashboard)
        • 1. Install the dashboard

Installing k8s with kubeadm

1. Cluster types

# Kubernetes clusters broadly fall into two types: single-master and multi-master.
# 1. Single master, multiple nodes: one Master node and several Node nodes.
#    Simple to set up, but the master is a single point of failure; suitable for test environments.
# 2. Multiple masters, multiple nodes: several Master nodes and several Node nodes.
#    More work to set up, but far more resilient; suitable for production environments.

2. Installation methods

Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

# Method 1: kubeadm
#   kubeadm is a k8s deployment tool that provides `kubeadm init` and `kubeadm join`
#   for quickly standing up a Kubernetes cluster.
# Method 2: binary packages
#   Download the release binaries from GitHub and deploy each component by hand.
# kubeadm lowers the barrier to entry but hides many details, which makes problems
# harder to troubleshoot. If you want more control, deploying from binaries is
# recommended: it is more work, but you learn how the pieces fit together along
# the way, which also helps with later maintenance.
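The two kubeadm commands mentioned above fit together in a simple pattern. The sketch below only assembles the command strings (the token and hash are hypothetical placeholders; a real run needs prepared hosts and access to the container images):

```shell
#!/bin/sh
# Minimal sketch of the kubeadm workflow: `init` runs once on the first
# control-plane node, and `join` runs on every other node with the token
# and CA-cert hash that `init` prints. The endpoint is the HA VIP:port
# pair used throughout this guide; <token> and <hash> are placeholders.
ENDPOINT="172.16.1.116:8443"
JOIN_CMD="kubeadm join ${ENDPOINT} --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
echo "${JOIN_CMD}"
```

In a real deployment you would not type the join command by hand; you copy it from the `kubeadm init` output (or regenerate it with `kubeadm token create --print-join-command`, as shown later in this guide).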

3. High-availability architecture diagram

I. Preparing the environment (host machine with 16 GB+ RAM)

1. Software and system requirements

Software            Version
CentOS              CentOS Linux release 7.5 or later
Docker              19.03.12
Kubernetes          v1.21.3
Flannel             v0.14.0
kernel-lt           kernel-lt-5.4.137-1.el7.elrepo.x86_64.rpm
kernel-lt-devel     kernel-lt-devel-5.4.137-1.el7.elrepo.x86_64.rpm

2. Node planning

  • Use addresses in the 192 range for node IPs to avoid conflicts with the Kubernetes internal networks.

Host          IP               Spec             Kernel
k8s-master1   192.168.15.111   2 cores / 2 GB   4.4+
k8s-master2   192.168.15.112   2 cores / 2 GB   4.4+
k8s-master3   192.168.15.113   2 cores / 2 GB   4.4+
k8s-node1     192.168.15.114   2 cores / 2 GB   4.4+
k8s-node2     192.168.15.115   2 cores / 2 GB   4.4+

II. Installing k8s with kubeadm

Each server needs at least 2 CPU cores and 2 GB of RAM. If a machine falls short of that, append --ignore-preflight-errors=NumCPU to the cluster initialization command.
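For example, on an undersized test VM the flag is simply appended to the init command used later in this guide. This sketch only assembles the command string; the flag skips the CPU-count preflight check while all other preflight checks still run:

```shell
#!/bin/sh
# Build the init command for a host below the 2-core minimum.
# --ignore-preflight-errors=NumCPU tells kubeadm to treat the CPU-count
# preflight failure as a warning instead of an error.
INIT_CMD="kubeadm init --config init-config.yaml --upload-certs"
INIT_CMD="${INIT_CMD} --ignore-preflight-errors=NumCPU"
echo "${INIT_CMD}"
```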

1. Kernel tuning script (all machines)

[root@k8s-m-01 ~]# vim base.sh
#!/bin/bash
# 1. Set the hostname and update the NIC config
hostnamectl set-hostname $1 &&\
sed -i "s#111#$2#g" /etc/sysconfig/network-scripts/ifcfg-eth[01] &&\
systemctl restart network &&\
# 2. Disable SELinux and the firewall; disable DNS lookups on SSH connections
setenforce 0 &&\
sed -i 's#enforcing#disabled#g' /etc/selinux/config &&\
systemctl disable --now firewalld &&\
# only needed if iptables is installed
# systemctl disable --now iptables &&\
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config &&\
systemctl restart sshd &&\
# 3. Disable the swap partition
# Once swap is used, system performance drops sharply, so k8s normally requires swap to be off.
# cat /etc/fstab   # comment out the swap line; skip if no swap is configured
swapoff -a &&\
# tell kubelet to tolerate swap
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet &&\
# 4. Update the local hosts file
cat >>/etc/hosts <<EOF
192.168.15.111 k8s-m-01 m1
192.168.15.112 k8s-n-01 n1
192.168.15.113 k8s-n-02 n2
EOF
# 5. Configure mirror repos (domestic Chinese mirrors)
# The official CentOS yum mirrors are very slow from inside China, so replace them
# with a mature domestic mirror such as Tsinghua, NetEase, or Aliyun.
rm -rf /etc/yum.repos.d/* &&\
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo &&\
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &&\
yum clean all &&\
yum makecache &&\
# 6. Update the system (check your kernel version first; if it is above 4.0 you can drop --exclude)
yum update -y --exclude=kernel* &&\
# Docker needs fairly new kernel features (e.g. ipvs), so a 4.x kernel is needed,
# ideally 4.18+; CentOS 8 already ships one and does not need a kernel upgrade.
# 7. Install commonly used base tools
yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp ntpdate -y &&\
# 8. Download a newer kernel (skip on CentOS 8)
# Kernels below 4.0 have bugs that can cause traffic jitter under heavy production load.
cd /opt/ &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.137-1.el7.elrepo.x86_64.rpm &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.137-1.el7.elrepo.x86_64.rpm &&\
# mirror: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
# 9. Install the new kernel
yum localinstall /opt/kernel-lt* -y &&\
# 10. Make it the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg &&\
# 11. Verify the default kernel, then reboot into the 5.4 kernel
grubby --default-kernel &&\
reboot

2. Passwordless SSH and time sync (all machines)

# 1. Passwordless SSH
[root@k8s-master-01 ~]# ssh-keygen -t rsa
[root@k8s-master-01 ~]# for i in m1 m2 m3 n1 n2;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i;done

# In a cluster, time is critical: once one machine's clock drifts away from the
# rest of the cluster, all kinds of problems can follow. So before deploying,
# synchronize the clocks on every machine.

# Option 1: ntpdate
# 2. Put the time sync in a cron job
crontab -e
# refresh every 5 minutes
*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null

# Option 2: chrony
[root@k8s-m-01 ~]# yum -y install chrony
[root@k8s-m-01 ~]# systemctl enable --now chronyd
[root@k8s-m-01 ~]# date   # check that all machines report the same time
Mon Aug  2 10:44:18 CST 2021

3. Installing IPVS and kernel tuning (all machines)

Kubernetes Services support two proxy modes: iptables and ipvs.

Of the two, ipvs performs better, but to use it the ipvs kernel modules must be loaded by hand.

# 1. Install IPVS and load its modules (all nodes)
[root@k8s-m-01 ~]# yum install ipset ipvsadm   # if these two commands are missing
# ipvs is a kernel module with very high network forwarding performance;
# it is normally the first choice.
[root@k8s-n-01 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
    /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
    fi
done

# 2. Make the script executable and run it (all nodes)
[root@k8s-n-01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

# 3. Kernel parameter tuning (all nodes)
# The main goal is to make the host better suited to running Kubernetes.
[root@k8s-n-01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1      # do not check whether enough physical memory is available
vm.swappiness=0             # avoid swap; only use it when the system is OOM
vm.panic_on_oom=0           # do not panic on OOM
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
# apply immediately
sysctl --system

4. Installing Docker (all machines)

1. Docker install script
Option 1: Huawei Cloud mirror
[root@k8s-m-01 ~]# vim docker.sh
# 1. Remove any previously installed Docker
sudo yum remove docker docker-common docker-selinux docker-engine &&\
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# 2. Add the Docker repo
wget -O /etc/yum.repos.d/docker-ce.repo https://repo.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo &&\
# 3. Point the repo at the mirror
sudo sed -i 's+download.docker.com+repo.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# 4. Rebuild the yum cache
yum clean all &&\
yum makecache &&\
# 5. Install Docker
sudo yum makecache fast &&\
sudo yum install docker-ce -y &&\
# 6. Enable and start Docker
systemctl enable --now docker.service

# 7. Configure a registry mirror (all nodes) -- run separately to speed up image pulls
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF

Option 2: Aliyun mirror
[root@k8s-n-01 ~]# vim docker.sh
# Step 1: install required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# Step 2: add the repo
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&\
# Step 3: point the repo at the mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast &&\
sudo yum -y install docker-ce &&\
# Step 5: enable and start Docker
systemctl enable --now docker.service &&\
# Step 6: configure a registry mirror
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
2. Uninstalling Docker
# 1. Remove older versions
sudo yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
# 2. Remove the packages
yum remove docker-ce docker-ce-cli containerd.io -y
# 3. Delete the data directory (Docker's default working path)
rm -rf /var/lib/docker
# 4. Registry mirror (Docker optimization):
#    log in to Aliyun, open the Container Registry service,
#    find your mirror accelerator address, and configure it

5. Installing Kubernetes (all machines)

# 1. Add the Aliyun Kubernetes repo
[root@k8s-n-02 yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# 2. Install -- either the latest version:
yum install -y kubelet kubeadm kubectl
# or pin the version used in this guide (kubelet-1.21.3):
yum install kubectl-1.21.3 kubeadm-1.21.3 kubelet-1.21.3 -y
# 3. Only enable kubelet at boot for now; it cannot run properly until the cluster is initialized
systemctl enable --now kubelet.service
# 4. Check the version
[root@k8s-m-01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3",

6. Making kubeadm highly available

1. Install the HA software (all master nodes)
# Many load balancers would work; anything that makes the api-server highly available will do.
# The commonly recommended pair: keepalived + haproxy
[root@k8s-m-01 ~]# yum install -y keepalived haproxy
2. Configure keepalived (all master nodes)
# 1. The configuration differs slightly per node (state and priority)
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cd /etc/keepalived
KUBE_APISERVER_IP=`hostname -i`
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_kubernetes {
    script "/etc/keepalived/check_kubernetes.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                       # on m2 and m3 set this to BACKUP
    interface eth1
    mcast_src_ip ${KUBE_APISERVER_IP}
    virtual_router_id 51
    priority 100                       # priority: 90 on m2, 80 on m3
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        172.16.1.116
    }
}
EOF
# 2. Reload and start keepalived
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now keepalived
# 3. Verify keepalived is running and holds the VIP
[root@k8s-m-01 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2021-08-01 14:48:23 CST; 27s ago
[root@k8s-m-01 keepalived]# ip a | grep 116
    inet 172.16.1.116/32 scope global eth1
3. Configure haproxy (all master nodes)
# 1. HAProxy load-balances across the api-servers (in the cloud you would use an SLB instead)
[root@k8s-m-01 keepalived]# vim /etc/haproxy/haproxy.cfg
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin

frontend k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-m-01 172.16.1.111:6443 check inter 2000 fall 2 rise 2 weight 100
    server k8s-m-02 172.16.1.112:6443 check inter 2000 fall 2 rise 2 weight 100
    server k8s-m-03 172.16.1.113:6443 check inter 2000 fall 2 rise 2 weight 100

# 2. Start haproxy
[root@k8s-m-01 keepalived]# systemctl daemon-reload
[root@k8s-m-01 /etc/keepalived]# systemctl enable --now haproxy.service
# 3. Check its status
[root@k8s-m-01 keepalived]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-07-16 21:12:00 CST; 27s ago
 Main PID: 4997 (haproxy-systemd)

7. Initializing the m01 master node

1. List the images Kubernetes needs
# 1. List the default images
[root@k8s-m-01 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
quay.io/coreos/flannel:v0.14.0
# 2. List the same images from an Aliyun mirror
[root@k8s-m-01 ~]# kubeadm config images list --image-repository=registry.cn-shanghai.aliyuncs.com/mmk8s
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-apiserver:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-controller-manager:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-scheduler:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/kube-proxy:v1.21.3
registry.cn-shanghai.aliyuncs.com/mmk8s/pause:3.4.1
registry.cn-shanghai.aliyuncs.com/mmk8s/etcd:3.4.13-0
registry.cn-shanghai.aliyuncs.com/mmk8s/coredns:v1.8.0
2. Deploy the m01 master node
# 1. Generate the default init configuration
[root@k8s-m-01 ~]# kubeadm config print init-defaults > init-config.yaml
# 2. Edit init-config.yaml
[root@k8s-m-01 ~]# vim init-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef      # your token will differ
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.1.111      # this host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-m-01                      # this host's name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.16.1.116                      # the HA virtual IP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: 172.16.1.116:8443   # the HA virtual IP
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-shanghai.aliyuncs.com/baim0os   # or your own image repo
kind: ClusterConfiguration
kubernetesVersion: 1.21.3             # version number
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16            # pod network
  serviceSubnet: 10.96.0.0/12
scheduler: {}

# 3. Initialize the cluster
[root@k8s-m-01 ~]# kubeadm init --config init-config.yaml --upload-certs
You can now join any number of the control-plane node running the following command on each as root:
# copy this command for the other master nodes
  kubeadm join 172.16.1.116:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d \
    --control-plane --certificate-key 1e852aa82be85e8b1b4776cce3a0519b1d0b1f76e5633e5262e2436e8f165993
Then you can join any number of worker nodes by running the following on each as root:
# copy this command for the worker nodes
  kubeadm join 172.16.1.116:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d

# 4. If a node needs the join command again, the master can re-print it
# (re-running this does not change the token):
[root@k8s-m-01 ~]# kubeadm token create --print-join-command
kubeadm join 172.16.1.116:8443 --token pfu0ek.ndis39t916v9clq1 --discovery-token-ca-cert-hash sha256:3c24cf3218a243148f20c6804d3766d2b6cd5dadc620313d0cf2dcbfd1626c5d

# 5. After initialization completes, restart kubelet
[root@k8s-m-01 ~]# systemctl restart kubelet.service

# 6. Set up the kubectl user configuration (run on master01)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 7. If using root, you can instead add it to the environment (optional)
# temporary:
[root@k8s-m-01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
# permanent:
[root@k8s-m-01 ~]# vim /etc/profile.d/kubernetes.sh
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-m-01 ~]# source /etc/profile

# 8. Label the worker nodes and check the cluster
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-01 node-role.kubernetes.io/node=n01
node/k8s-n-01 labeled
[root@k8s-m-01 ~]# kubectl label nodes k8s-n-02 node-role.kubernetes.io/node=n02
node/k8s-n-02 labeled
[root@k8s-m-01 ~]# kubectl get node
NAME       STATUS   ROLES                  AGE     VERSION
k8s-m-01   Ready    control-plane,master   73m     v1.21.3
k8s-m-02   Ready    control-plane,master   63m     v1.21.3
k8s-m-03   Ready    control-plane,master   63m     v1.21.3
k8s-n-01   Ready    node                   2m40s   v1.21.3
k8s-n-02   Ready    node                   62m     v1.21.3

# 9. Enable command completion (run on all nodes)
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
3. Troubleshooting
# 1. Joining a node to the cluster may fail with:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
# NB: make sure Docker is installed and running first, then retry the join.
# Cause: the bridge netfilter settings are not in place. (Swap must also stay off --
# once swap is used, system performance drops sharply, which is why k8s requires it disabled.)
# Fix: run these three commands, then retry the join:
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward

# 2. Getting component STATUS back to Healthy
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
# Fix: comment out the --port=0 line in both manifests, then restart kubelet
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
    #- --port=0
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
    #- --port=0
[root@k8s-m-01 ~]# systemctl restart kubelet.service
# Check the status again:
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

8. The Calico network plugin

Calico is a pure layer-3 approach to multi-host networking for OpenStack VMs and Docker containers. Unlike overlay networks such as flannel or libnetwork's overlay driver, Calico uses virtual routing instead of virtual switching: each virtual router propagates reachability information (routes) to the rest of the data center over BGP.
1. Install the cluster network plugin (master node)


2. Apply the Calico manifest
# 1. Download and apply the network plugin manifest
[root@k8s-m-01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
[root@k8s-m-01 ~]# kubectl apply -f calico.yaml
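After applying calico.yaml, the calico-node DaemonSet should eventually report Ready on every node. The kubectl calls below need a working cluster, so this sketch only builds the command strings you would run:

```shell
#!/bin/sh
# Two handy checks for a Calico rollout (command strings only):
#  - wait for the calico-node DaemonSet to finish rolling out
#  - list the calico-node pods with their node placement
CHECK_DS="kubectl -n kube-system rollout status daemonset/calico-node"
CHECK_PODS="kubectl -n kube-system get pods -l k8s-app=calico-node -o wide"
printf '%s\n%s\n' "${CHECK_DS}" "${CHECK_PODS}"
```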
3. Check cluster status
# Option 1: check the nodes
[root@k8s-m-01 ~]# kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
k8s-m-01   Ready    control-plane,master   14m     v1.21.3
k8s-m-02   Ready    control-plane,master   4m43s   v1.21.3
k8s-m-03   Ready    control-plane,master   4m36s   v1.21.3
k8s-n-01   Ready    node                   3m2s    v1.21.3
k8s-n-02   Ready    node                   3m2s    v1.21.3
# Option 2: DNS test
[root@k8s-m-01 ~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes     # run this command; the output below means success
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #

III. Installing the cluster dashboard (Dashboard)

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster and troubleshoot them, as well as manage the cluster itself and its resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (such as Deployments, Jobs, and DaemonSets).

1. Install the dashboard

From the dashboard you can scale a Deployment, trigger a rolling upgrade, restart Pods, or use the wizard to create new applications.

# 1. Download and apply the manifest
# Option 1: download from GitHub
[root@k8s-m-01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
# Option 2: download from a self-hosted mirror, then apply
[root@k8s-m-01 ~]# wget http://www.mmin.xyz:81/package/k8s/recommended.yaml
[root@k8s-m-01 ~]# kubectl apply -f recommended.yaml
# Option 3: apply straight from the URL
[root@k8s-m-01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# 2. Check the service ports
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.109.68.74    <none>        8000/TCP   30s
kubernetes-dashboard        ClusterIP   10.105.125.10   <none>        443/TCP    34s

# 3. Expose a port for external access
[root@k8s-m-01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
  type: ClusterIP  =>  type: NodePort    # change to NodePort

# 4. Check the ports again
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.44.119   <none>        8000/TCP        12m
kubernetes-dashboard        NodePort    10.96.42.127   <none>        443:40927/TCP   12m

# 5. Create the token configuration
[root@k8s-m-01 ~]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

# 6. Apply it to the cluster
[root@k8s-m-01 ~]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

# 7. Retrieve the token
[root@k8s-m-01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1NeTJxSDZmaFc1a00zWVRXTHdQSlZlQnNjWUdQMW1zMjg5OTBZQ1JxNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxMm56Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2Q4MjIzYi1jYmY1LTQ5ZTUtYjAxMS1hZTAzMzM2MzVhYzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Q4gC_Kr_Ltl_zG0xkhSri7FQrXxdA5Zjb4ELd7-bVbc_9kAe292w0VM_fVJky5FtldsY0XOp6zbiDVCPkmJi9NXT-P09WvPc9g-ISbbQB_QRIWrEWF544TmRSTZJW5rvafhbfONtqZ_3vWtMkCiDsf7EAwDWLLqA5T46bAn-fncehiV0pf0x_X16t72Qqa-aizHBrVcMsXQU0wnYC7jt373pnhnFHYdcJXx_LgHaC1LgCzx5BfkuphiYOaj_dVB6tAlRkQo3QkFP9GIBW3LcVfhOQBmMQl8KeHvBW4QC67PQRv55IUaUDJ_lRC2QKbeJzaUto-ER4YxFwr4tncBwZQ

# 8. Verify the cluster works
[root@k8s-m-01 kubernetes]# kubectl run test01 -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #

# 9. Browse to the dashboard over the NodePort and log in with the token
192.168.15.111:40927    # the port from step 4



