Building a Production-Grade Kubernetes Cluster with RKE

1. Introduction to the RKE Tool

  • RKE is a CNCF-certified open-source Kubernetes distribution that runs entirely inside Docker containers.

  • It addresses the most common source of Kubernetes installation complexity by removing most host dependencies and providing a stable path for deployment, upgrades, and rollbacks.

  • With RKE, Kubernetes becomes largely independent of the underlying operating system and platform, making automated Kubernetes operations straightforward.

  • As long as a supported Docker version is running, Kubernetes can be deployed and run with RKE. RKE can build a cluster with a single command in just a few minutes, and its declarative configuration makes Kubernetes upgrades atomic and safe.

2. Cluster Host Preparation

2.1 Cluster Host Requirements

2.1.1 Deployment Environment Notes

Machines used to deploy the Kubernetes cluster must meet the following requirements:

1) One or more machines running CentOS 7
2) Hardware: 2 GB of RAM or more, 2 or more CPUs, and 100 GB of disk or more
3) Full network connectivity between all machines in the cluster
4) Internet access to pull images; if the servers cannot reach the internet, download the images in advance and import them on each node (see the sketch after this list)
5) Swap disabled
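
For item 4, a minimal sketch of staging images for air-gapped nodes with docker save/load; the image name rancher/hyperkube:v1.21.9-rancher1 is one actually pulled later in this deployment, and the tarball name is an arbitrary choice:

On a machine with internet access, pull and export the image:
# docker pull rancher/hyperkube:v1.21.9-rancher1
# docker save -o hyperkube-v1.21.9.tar rancher/hyperkube:v1.21.9-rancher1
Copy the tarball to the offline node, then import it:
# docker load -i hyperkube-v1.21.9.tar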

2.1.2 Software Versions

Software           Version
Operating system   CentOS 7
docker-ce          20.10.12
kubernetes         1.21.9 (deployed via rancher/hyperkube:v1.21.9-rancher1)

2.1.3 Host Names, IP Addresses, and Roles

主機(jī)名稱(chēng)IP地址角色
master01192.168.10.10controlplane、rancher、rke
master02192.168.10.11controlpane
worker01192.168.10.12worker
worker02192.168.10.13worker
etcd01192.168.10.14etcd

2.2 Configuring Host Names

Set the appropriate host name on every cluster host.

# hostnamectl set-hostname xxx    (replace xxx with the corresponding host name)
192.168.10.10 master01
192.168.10.11 master02
192.168.10.12 worker01
192.168.10.13 worker02
192.168.10.14 etcd01

2.3 Configuring Host IP Addresses

Set the appropriate IP address on every cluster host.

# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"    # changed to static addressing
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
Add the following entries:
IPADDR="192.168.10.XXX"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"

2.4 Host Name and IP Address Resolution

Configure this on all hosts.

# vim /etc/hosts
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.10 master01
192.168.10.11 master02
192.168.10.12 worker01
192.168.10.13 worker02
192.168.10.14 etcd01

2.5 Configuring ip_forward and Bridge Filtering

Configure this on all hosts.

Pass bridged IPv4 traffic to the iptables chains:

# vim /etc/sysctl.conf
# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# modprobe br_netfilter
# sysctl -p /etc/sysctl.conf
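
Note that modprobe does not persist across reboots. A small sketch that loads br_netfilter automatically at boot via systemd-modules-load (the file name br_netfilter.conf is an arbitrary choice):

# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# cat /etc/modules-load.d/br_netfilter.conf
br_netfilter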

2.6 Host Security Settings

Configure this on all hosts.

2.6.1 Firewall

# systemctl stop firewalld
# systemctl disable firewalld
# firewall-cmd --state

2.6.2 SELinux

Reboot the operating system after making this change.

永久關(guān)閉,一定要重啟操作系統(tǒng)后生效。 sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 臨時(shí)關(guān)閉,不重啟操作系統(tǒng),即刻生效。 # setenforce 0

2.7 Disabling Swap

Configure this on all hosts.

永久關(guān)閉,需要重啟操作系統(tǒng)生效。 # sed -ri 's/.*swap.*/#&/' /etc/fstab # cat /etc/fstab...... #/dev/mapper/centos_192-swap swap swap defaults 0 0 臨時(shí)關(guān)閉,不需要重啟操作系統(tǒng),即刻生效。 # swapoff -a

2.8 時(shí)間同步

Configure this on all hosts.

# yum -y install ntpdate
# crontab -e
0 */1 * * * ntpdate time1.aliyun.com
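
ntpdate is deprecated on newer distributions; as an alternative sketch, chrony (shipped with CentOS 7) keeps the clock synchronized continuously without a cron job:

# yum -y install chrony
# systemctl enable --now chronyd
# chronyc sources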

3. Docker Deployment

Configure this on all hosts.

3.1 Configure the Docker YUM Repository

# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.2 Install Docker CE

# yum -y install docker-ce

3.3 Start the Docker Service

# systemctl enable docker
# systemctl start docker

3.4 Configure a Docker Registry Mirror

# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://s27w6kze.mirror.aliyuncs.com"]
}
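
One step the original omits: daemon.json is only read when the Docker daemon starts, so restart Docker for the mirror to take effect, then verify it is listed:

# systemctl restart docker
# docker info | grep -A1 "Registry Mirrors"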

4. Installing docker-compose

# curl -L "https://github.com/docker/compose/releases/download/1.28.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
# ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# docker-compose --version

5. Adding the rancher User

When using CentOS, RKE cannot use the root account, so a dedicated account must be added for Docker-related operations.

Run this on all cluster hosts.

# useradd rancher
# usermod -aG docker rancher
# echo 123 | passwd --stdin rancher

6. Generating SSH Keys for Cluster Deployment

Create the key pair on the host where the rke binary is installed (the control host); it will be used to deploy the cluster.

6.1 Generate the SSH Key Pair

# ssh-keygen

6.2 Copy the Public Key to All Cluster Hosts

# ssh-copy-id rancher@master01
# ssh-copy-id rancher@master02
# ssh-copy-id rancher@worker01
# ssh-copy-id rancher@worker02
# ssh-copy-id rancher@etcd01
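
Equivalently, a small convenience sketch that loops over the host names defined in section 2.2:

# for h in master01 master02 worker01 worker02 etcd01; do ssh-copy-id rancher@$h; done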

6.3 Verify the SSH Keys

This deployment installs the rke binary on master01.

From the rke host, test the connection to each of the other cluster hosts and verify that the docker ps command works.

# ssh rancher@<hostname>
remote host# docker ps

7. Downloading the rke Tool

This deployment installs the rke binary on master01.

# wget https://github.com/rancher/rke/releases/download/v1.3.7/rke_linux-amd64
# mv rke_linux-amd64 /usr/local/bin/rke
# chmod +x /usr/local/bin/rke
# rke --version
rke version v1.3.7

8. Initializing the rke Configuration File

# mkdir -p /app/rancher
# cd /app/rancher
# rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:    # path to the cluster private key
[+] Number of Hosts [1]: 3    # the cluster has 3 nodes
[+] SSH Address of host (1) [none]: 192.168.10.10    # IP address of the first node
[+] SSH Port of host (1) [22]: 22    # SSH port of the first node
[+] SSH Private Key Path of host (192.168.10.10) [none]: ~/.ssh/id_rsa    # private key path for the first node
[+] SSH User of host (192.168.10.10) [ubuntu]: rancher    # remote user name
[+] Is host (192.168.10.10) a Control Plane host (y/n)? [y]: y    # this is a control plane node
[+] Is host (192.168.10.10) a Worker host (y/n)? [n]: n    # not a worker node
[+] Is host (192.168.10.10) an etcd host (y/n)? [n]: n    # not an etcd node
[+] Override Hostname of host (192.168.10.10) [none]:    # do not override the existing host name
[+] Internal IP of host (192.168.10.10) [none]:    # internal (LAN) IP of the host
[+] Docker socket path on host (192.168.10.10) [/var/run/docker.sock]:    # docker.sock path on the host
[+] SSH Address of host (2) [none]: 192.168.10.12    # second node
[+] SSH Port of host (2) [22]: 22    # SSH port
[+] SSH Private Key Path of host (192.168.10.12) [none]: ~/.ssh/id_rsa    # private key path
[+] SSH User of host (192.168.10.12) [ubuntu]: rancher    # remote user name
[+] Is host (192.168.10.12) a Control Plane host (y/n)? [y]: n    # not a control plane node
[+] Is host (192.168.10.12) a Worker host (y/n)? [n]: y    # this is a worker node
[+] Is host (192.168.10.12) an etcd host (y/n)? [n]: n    # not an etcd node
[+] Override Hostname of host (192.168.10.12) [none]:    # do not override the existing host name
[+] Internal IP of host (192.168.10.12) [none]:    # internal (LAN) IP of the host
[+] Docker socket path on host (192.168.10.12) [/var/run/docker.sock]:    # docker.sock path on the host
[+] SSH Address of host (3) [none]: 192.168.10.14    # third node
[+] SSH Port of host (3) [22]: 22    # SSH port
[+] SSH Private Key Path of host (192.168.10.14) [none]: ~/.ssh/id_rsa    # private key path
[+] SSH User of host (192.168.10.14) [ubuntu]: rancher    # remote user name
[+] Is host (192.168.10.14) a Control Plane host (y/n)? [y]: n    # not a control plane node
[+] Is host (192.168.10.14) a Worker host (y/n)? [n]: n    # not a worker node
[+] Is host (192.168.10.14) an etcd host (y/n)? [n]: y    # this is an etcd node
[+] Override Hostname of host (192.168.10.14) [none]:    # do not override the existing host name
[+] Internal IP of host (192.168.10.14) [none]:    # internal (LAN) IP of the host
[+] Docker socket path on host (192.168.10.14) [/var/run/docker.sock]:    # docker.sock path on the host
[+] Network Plugin Type (flannel, calico, weave, canal, aci) [canal]:    # network plugin to use
[+] Authentication Strategy [x509]:    # authentication strategy
[+] Authorization Mode (rbac, none) [rbac]:    # authorization mode
[+] Kubernetes Docker image [rancher/hyperkube:v1.21.9-rancher1]:    # cluster container image
[+] Cluster domain [cluster.local]:    # cluster domain
[+] Service Cluster IP Range [10.43.0.0/16]:    # Service IP range for the cluster
[+] Enable PodSecurityPolicy [n]:    # whether to enable PodSecurityPolicy
[+] Cluster Network CIDR [10.42.0.0/16]:    # Pod network CIDR
[+] Cluster DNS Service IP [10.43.0.10]:    # cluster DNS Service IP
[+] Add addon manifest URLs or YAML files [no]:    # whether to add addon manifest URLs or YAML files
[root@master01 rancher]# ls
cluster.yml

In the cluster.yml file:

kube-controller:
  image: ""
  extra_args:
    # these two parameters must be set if kubeflow or istio will be deployed later
    cluster-signing-cert-file: "/etc/kubernetes/ssl/kube-ca.pem"
    cluster-signing-key-file: "/etc/kubernetes/ssl/kube-ca-key.pem"
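
For reference, a sketch of the nodes section that the interactive session above writes into cluster.yml, abridged to the fields answered above (the generated file also contains defaulted fields such as internal_address, labels, and taints):

nodes:
- address: 192.168.10.10
  port: "22"
  role:
  - controlplane
  user: rancher
  ssh_key_path: ~/.ssh/id_rsa
- address: 192.168.10.12
  port: "22"
  role:
  - worker
  user: rancher
  ssh_key_path: ~/.ssh/id_rsa
- address: 192.168.10.14
  port: "22"
  role:
  - etcd
  user: rancher
  ssh_key_path: ~/.ssh/id_rsa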

9. Cluster Deployment

# pwd
/app/rancher
# rke up

Output:

INFO[0000] Running RKE version: v1.3.7
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.10.14]
INFO[0000] [dialer] Setup tunnel for host [192.168.10.10]
INFO[0000] [dialer] Setup tunnel for host [192.168.10.12]
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.10.14], try #1
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.10.10], try #1
INFO[0000] Checking if container [cluster-state-deployer] is running on host [192.168.10.12], try #1
INFO[0000] [certificates] Generating CA kubernetes certificates
INFO[0000] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO[0000] [certificates] Generating Kubernetes API server certificates
INFO[0000] [certificates] Generating Service account token key
INFO[0000] [certificates] Generating Kube Controller certificates
INFO[0000] [certificates] Generating Kube Scheduler certificates
INFO[0000] [certificates] Generating Kube Proxy certificates
INFO[0001] [certificates] Generating Node certificate
INFO[0001] [certificates] Generating admin certificates and kubeconfig
INFO[0001] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0001] [certificates] Generating kube-etcd-192-168-10-14 certificate and key
INFO[0001] Successfully Deployed state file at [./cluster.rkestate]
INFO[0001] Building Kubernetes cluster
INFO[0001] [dialer] Setup tunnel for host [192.168.10.12]
INFO[0001] [dialer] Setup tunnel for host [192.168.10.14]
INFO[0001] [dialer] Setup tunnel for host [192.168.10.10]
INFO[0001] [network] Deploying port listener containers
INFO[0001] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14]
INFO[0001] Starting container [rke-etcd-port-listener] on host [192.168.10.14], try #1
INFO[0001] [network] Successfully started [rke-etcd-port-listener] container on host [192.168.10.14]
INFO[0001] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0001] Starting container [rke-cp-port-listener] on host [192.168.10.10], try #1
INFO[0002] [network] Successfully started [rke-cp-port-listener] container on host [192.168.10.10]
INFO[0002] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12]
INFO[0002] Starting container [rke-worker-port-listener] on host [192.168.10.12], try #1
INFO[0002] [network] Successfully started [rke-worker-port-listener] container on host [192.168.10.12]
INFO[0002] [network] Port listener containers deployed successfully
INFO[0002] [network] Running control plane -> etcd port checks
INFO[0002] [network] Checking if host [192.168.10.10] can connect to host(s) [192.168.10.14] on port(s) [2379], try #1
INFO[0002] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0002] Starting container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0002] [network] Successfully started [rke-port-checker] container on host [192.168.10.10]
INFO[0002] Removing container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0002] [network] Running control plane -> worker port checks
INFO[0002] [network] Checking if host [192.168.10.10] can connect to host(s) [192.168.10.12] on port(s) [10250], try #1
INFO[0002] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0003] Starting container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0003] [network] Successfully started [rke-port-checker] container on host [192.168.10.10] INFO[0003] Removing container [rke-port-checker] on host [192.168.10.10], try #1 INFO[0003] [network] Running workers -> control plane port checks INFO[0003] [network] Checking if host [192.168.10.12] can connect to host(s) [192.168.10.10] on port(s) [6443], try #1 INFO[0003] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0003] Starting container [rke-port-checker] on host [192.168.10.12], try #1 INFO[0003] [network] Successfully started [rke-port-checker] container on host [192.168.10.12] INFO[0003] Removing container [rke-port-checker] on host [192.168.10.12], try #1 INFO[0003] [network] Checking KubeAPI port Control Plane hosts INFO[0003] [network] Removing port listener containers INFO[0003] Removing container [rke-etcd-port-listener] on host [192.168.10.14], try #1 INFO[0003] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.10.14] INFO[0003] Removing container [rke-cp-port-listener] on host [192.168.10.10], try #1 INFO[0003] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.10.10] INFO[0003] Removing container [rke-worker-port-listener] on host [192.168.10.12], try #1 INFO[0003] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.10.12] INFO[0003] [network] Port listener containers removed successfully INFO[0003] [certificates] Deploying kubernetes certificates to Cluster nodes INFO[0003] Checking if container [cert-deployer] is running on host [192.168.10.14], try #1 INFO[0003] Checking if container [cert-deployer] is running on host [192.168.10.10], try #1 INFO[0003] Checking if container [cert-deployer] is running on host [192.168.10.12], try #1 INFO[0003] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0003] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0003] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0004] Starting container [cert-deployer] on host [192.168.10.14], try #1 INFO[0004] Starting container [cert-deployer] on host [192.168.10.12], try #1 INFO[0004] Starting container [cert-deployer] on host [192.168.10.10], try #1 INFO[0004] Checking if container [cert-deployer] is running on host [192.168.10.14], try #1 INFO[0004] Checking if container [cert-deployer] is running on host [192.168.10.12], try #1 INFO[0004] Checking if container [cert-deployer] is running on host [192.168.10.10], try #1 INFO[0009] Checking if container [cert-deployer] is running on host [192.168.10.14], try #1 INFO[0009] Removing container [cert-deployer] on host [192.168.10.14], try #1 INFO[0009] Checking if container [cert-deployer] is running on host [192.168.10.12], try #1 INFO[0009] Removing container [cert-deployer] on host [192.168.10.12], try #1 INFO[0009] Checking if container [cert-deployer] is running on host [192.168.10.10], try #1 INFO[0009] Removing container [cert-deployer] on host [192.168.10.10], try #1 INFO[0009] [reconcile] Rebuilding and updating local kube config INFO[0009] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] WARN[0009] [reconcile] host [192.168.10.10] is a control plane node without reachable Kubernetes API endpoint in the cluster WARN[0009] [reconcile] no control plane node with reachable Kubernetes API endpoint in the cluster found INFO[0009] [certificates] Successfully deployed kubernetes certificates to Cluster nodes INFO[0009] [file-deploy] Deploying file 
[/etc/kubernetes/audit-policy.yaml] to node [192.168.10.10] INFO[0009] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0009] Starting container [file-deployer] on host [192.168.10.10], try #1 INFO[0009] Successfully started [file-deployer] container on host [192.168.10.10] INFO[0009] Waiting for [file-deployer] container to exit on host [192.168.10.10] INFO[0009] Waiting for [file-deployer] container to exit on host [192.168.10.10] INFO[0009] Container [file-deployer] is still running on host [192.168.10.10]: stderr: [], stdout: [] INFO[0010] Waiting for [file-deployer] container to exit on host [192.168.10.10] INFO[0010] Removing container [file-deployer] on host [192.168.10.10], try #1 INFO[0010] [remove/file-deployer] Successfully removed container on host [192.168.10.10] INFO[0010] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes INFO[0010] [reconcile] Reconciling cluster state INFO[0010] [reconcile] This is newly generated cluster INFO[0010] Pre-pulling kubernetes images INFO[0010] Pulling image [rancher/hyperkube:v1.21.9-rancher1] on host [192.168.10.10], try #1 INFO[0010] Pulling image [rancher/hyperkube:v1.21.9-rancher1] on host [192.168.10.14], try #1 INFO[0010] Pulling image [rancher/hyperkube:v1.21.9-rancher1] on host [192.168.10.12], try #1 INFO[0087] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0090] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.12] INFO[0092] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.14] INFO[0092] Kubernetes images pulled successfully INFO[0092] [etcd] Building up etcd plane.. INFO[0092] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0092] Starting container [etcd-fix-perm] on host [192.168.10.14], try #1 INFO[0092] Successfully started [etcd-fix-perm] container on host [192.168.10.14] INFO[0092] Waiting for [etcd-fix-perm] container to exit on host [192.168.10.14] INFO[0092] Waiting for [etcd-fix-perm] container to exit on host [192.168.10.14] INFO[0092] Container [etcd-fix-perm] is still running on host [192.168.10.14]: stderr: [], stdout: [] INFO[0093] Waiting for [etcd-fix-perm] container to exit on host [192.168.10.14] INFO[0093] Removing container [etcd-fix-perm] on host [192.168.10.14], try #1 INFO[0093] [remove/etcd-fix-perm] Successfully removed container on host [192.168.10.14] INFO[0093] Image [rancher/mirrored-coreos-etcd:v3.5.0] exists on host [192.168.10.14] INFO[0093] Starting container [etcd] on host [192.168.10.14], try #1 INFO[0093] [etcd] Successfully started [etcd] container on host [192.168.10.14] INFO[0093] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [192.168.10.14] INFO[0093] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0094] Starting container [etcd-rolling-snapshots] on host [192.168.10.14], try #1 INFO[0094] [etcd] Successfully started [etcd-rolling-snapshots] container on host [192.168.10.14] INFO[0099] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0099] Starting container [rke-bundle-cert] on host [192.168.10.14], try #1 INFO[0099] [certificates] Successfully started [rke-bundle-cert] container on host [192.168.10.14] INFO[0099] Waiting for [rke-bundle-cert] container to exit on host [192.168.10.14] INFO[0099] Container [rke-bundle-cert] is still running on host [192.168.10.14]: stderr: [], stdout: [] INFO[0100] Waiting for [rke-bundle-cert] container to exit on host 
[192.168.10.14] INFO[0100] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [192.168.10.14] INFO[0100] Removing container [rke-bundle-cert] on host [192.168.10.14], try #1 INFO[0100] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0100] Starting container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0100] [etcd] Successfully started [rke-log-linker] container on host [192.168.10.14] INFO[0100] Removing container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0100] [remove/rke-log-linker] Successfully removed container on host [192.168.10.14] INFO[0100] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0101] Starting container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0101] [etcd] Successfully started [rke-log-linker] container on host [192.168.10.14] INFO[0101] Removing container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0101] [remove/rke-log-linker] Successfully removed container on host [192.168.10.14] INFO[0101] [etcd] Successfully started etcd plane.. Checking etcd cluster health INFO[0101] [etcd] etcd host [192.168.10.14] reported healthy=true INFO[0101] [controlplane] Building up Controller Plane.. INFO[0101] Checking if container [service-sidekick] is running on host [192.168.10.10], try #1 INFO[0101] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0101] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0101] Starting container [kube-apiserver] on host [192.168.10.10], try #1 INFO[0101] [controlplane] Successfully started [kube-apiserver] container on host [192.168.10.10] INFO[0101] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [192.168.10.10] INFO[0106] [healthcheck] service [kube-apiserver] on host [192.168.10.10] is healthy INFO[0106] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0107] Starting container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0107] [controlplane] Successfully started [rke-log-linker] container on host [192.168.10.10] INFO[0107] Removing container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0107] [remove/rke-log-linker] Successfully removed container on host [192.168.10.10] INFO[0107] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0107] Starting container [kube-controller-manager] on host [192.168.10.10], try #1 INFO[0107] [controlplane] Successfully started [kube-controller-manager] container on host [192.168.10.10] INFO[0107] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [192.168.10.10] INFO[0112] [healthcheck] service [kube-controller-manager] on host [192.168.10.10] is healthy INFO[0112] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0113] Starting container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0113] [controlplane] Successfully started [rke-log-linker] container on host [192.168.10.10] INFO[0113] Removing container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0113] [remove/rke-log-linker] Successfully removed container on host [192.168.10.10] INFO[0113] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0113] Starting container [kube-scheduler] on host [192.168.10.10], try #1 INFO[0113] [controlplane] Successfully started [kube-scheduler] container on host [192.168.10.10] INFO[0113] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [192.168.10.10] 
INFO[0118] [healthcheck] service [kube-scheduler] on host [192.168.10.10] is healthy INFO[0118] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0119] Starting container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0119] [controlplane] Successfully started [rke-log-linker] container on host [192.168.10.10] INFO[0119] Removing container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0119] [remove/rke-log-linker] Successfully removed container on host [192.168.10.10] INFO[0119] [controlplane] Successfully started Controller Plane.. INFO[0119] [authz] Creating rke-job-deployer ServiceAccount INFO[0119] [authz] rke-job-deployer ServiceAccount created successfully INFO[0119] [authz] Creating system:node ClusterRoleBinding INFO[0119] [authz] system:node ClusterRoleBinding created successfully INFO[0119] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding INFO[0119] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully INFO[0119] Successfully Deployed state file at [./cluster.rkestate] INFO[0119] [state] Saving full cluster state to Kubernetes INFO[0119] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state INFO[0119] [worker] Building up Worker Plane.. INFO[0119] Checking if container [service-sidekick] is running on host [192.168.10.10], try #1 INFO[0119] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0119] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0119] [sidekick] Sidekick container already created on host [192.168.10.10] INFO[0119] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0119] Starting container [kubelet] on host [192.168.10.10], try #1 INFO[0119] [worker] Successfully started [kubelet] container on host [192.168.10.10] INFO[0119] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.10.10] INFO[0119] Starting container [nginx-proxy] on host [192.168.10.14], try #1 INFO[0119] [worker] Successfully started [nginx-proxy] container on host [192.168.10.14] INFO[0119] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0119] Starting container [nginx-proxy] on host [192.168.10.12], try #1 INFO[0119] [worker] Successfully started [nginx-proxy] container on host [192.168.10.12] INFO[0119] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0119] Starting container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0120] Starting container [rke-log-linker] on host [192.168.10.12], try #1 INFO[0120] [worker] Successfully started [rke-log-linker] container on host [192.168.10.14] INFO[0120] Removing container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0120] [remove/rke-log-linker] Successfully removed container on host [192.168.10.14] INFO[0120] Checking if container [service-sidekick] is running on host [192.168.10.14], try #1 INFO[0120] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0120] [worker] Successfully started [rke-log-linker] container on host [192.168.10.12] INFO[0120] Removing container [rke-log-linker] on host [192.168.10.12], try #1 INFO[0120] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.14] INFO[0120] [remove/rke-log-linker] Successfully removed container on host [192.168.10.12] INFO[0120] Checking if container [service-sidekick] is running on host [192.168.10.12], try #1 INFO[0120] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0120] 
Starting container [kubelet] on host [192.168.10.14], try #1 INFO[0120] [worker] Successfully started [kubelet] container on host [192.168.10.14] INFO[0120] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.10.14] INFO[0120] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.12] INFO[0120] Starting container [kubelet] on host [192.168.10.12], try #1 INFO[0120] [worker] Successfully started [kubelet] container on host [192.168.10.12] INFO[0120] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.10.12] INFO[0124] [healthcheck] service [kubelet] on host [192.168.10.10] is healthy INFO[0124] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0124] Starting container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0125] [worker] Successfully started [rke-log-linker] container on host [192.168.10.10] INFO[0125] Removing container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0125] [remove/rke-log-linker] Successfully removed container on host [192.168.10.10] INFO[0125] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0125] Starting container [kube-proxy] on host [192.168.10.10], try #1 INFO[0125] [worker] Successfully started [kube-proxy] container on host [192.168.10.10] INFO[0125] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.10.10] INFO[0125] [healthcheck] service [kubelet] on host [192.168.10.14] is healthy INFO[0125] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0125] Starting container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0125] [healthcheck] service [kubelet] on host [192.168.10.12] is healthy INFO[0125] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0125] [worker] Successfully started [rke-log-linker] container on host [192.168.10.14] INFO[0125] Starting container [rke-log-linker] on host [192.168.10.12], try #1 INFO[0125] Removing container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0126] [remove/rke-log-linker] Successfully removed container on host [192.168.10.14] INFO[0126] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.14] INFO[0126] Starting container [kube-proxy] on host [192.168.10.14], try #1 INFO[0126] [worker] Successfully started [rke-log-linker] container on host [192.168.10.12] INFO[0126] Removing container [rke-log-linker] on host [192.168.10.12], try #1 INFO[0126] [worker] Successfully started [kube-proxy] container on host [192.168.10.14] INFO[0126] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.10.14] INFO[0126] [remove/rke-log-linker] Successfully removed container on host [192.168.10.12] INFO[0126] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.12] INFO[0126] Starting container [kube-proxy] on host [192.168.10.12], try #1 INFO[0126] [worker] Successfully started [kube-proxy] container on host [192.168.10.12] INFO[0126] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.10.12] INFO[0130] [healthcheck] service [kube-proxy] on host [192.168.10.10] is healthy INFO[0130] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0130] Starting container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0130] [worker] Successfully started [rke-log-linker] container on host [192.168.10.10] INFO[0130] Removing container [rke-log-linker] on host [192.168.10.10], try #1 INFO[0130] [remove/rke-log-linker] Successfully removed container on host 
[192.168.10.10] INFO[0131] [healthcheck] service [kube-proxy] on host [192.168.10.14] is healthy INFO[0131] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0131] Starting container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0131] [healthcheck] service [kube-proxy] on host [192.168.10.12] is healthy INFO[0131] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0131] [worker] Successfully started [rke-log-linker] container on host [192.168.10.14] INFO[0131] Removing container [rke-log-linker] on host [192.168.10.14], try #1 INFO[0131] Starting container [rke-log-linker] on host [192.168.10.12], try #1 INFO[0131] [remove/rke-log-linker] Successfully removed container on host [192.168.10.14] INFO[0131] [worker] Successfully started [rke-log-linker] container on host [192.168.10.12] INFO[0131] Removing container [rke-log-linker] on host [192.168.10.12], try #1 INFO[0131] [remove/rke-log-linker] Successfully removed container on host [192.168.10.12] INFO[0131] [worker] Successfully started Worker Plane.. INFO[0131] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0131] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0131] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0132] Starting container [rke-log-cleaner] on host [192.168.10.14], try #1 INFO[0132] Starting container [rke-log-cleaner] on host [192.168.10.12], try #1 INFO[0132] Starting container [rke-log-cleaner] on host [192.168.10.10], try #1 INFO[0132] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.14] INFO[0132] Removing container [rke-log-cleaner] on host [192.168.10.14], try #1 INFO[0132] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.12] INFO[0132] Removing container [rke-log-cleaner] on host [192.168.10.12], try #1 INFO[0132] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.10] INFO[0132] Removing container [rke-log-cleaner] on host [192.168.10.10], try #1 INFO[0132] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.14] INFO[0132] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.12] INFO[0132] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.10] INFO[0132] [sync] Syncing nodes Labels and Taints INFO[0132] [sync] Successfully synced nodes Labels and Taints INFO[0132] [network] Setting up network plugin: canal INFO[0132] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes INFO[0132] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes INFO[0132] [addons] Executing deploy job rke-network-plugin INFO[0137] [addons] Setting up coredns INFO[0137] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes INFO[0137] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes INFO[0137] [addons] Executing deploy job rke-coredns-addon INFO[0142] [addons] CoreDNS deployed successfully INFO[0142] [dns] DNS provider coredns deployed successfully INFO[0142] [addons] Setting up Metrics Server INFO[0142] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes INFO[0142] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes INFO[0142] [addons] Executing deploy job rke-metrics-addon INFO[0147] [addons] Metrics Server deployed successfully INFO[0147] [ingress] Setting up nginx ingress controller INFO[0147] [ingress] removing admission batch jobs if they 
exist INFO[0147] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes INFO[0147] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes INFO[0147] [addons] Executing deploy job rke-ingress-controller INFO[0152] [ingress] removing default backend service and deployment if they exist INFO[0152] [ingress] ingress controller nginx deployed successfully INFO[0152] [addons] Setting up user addons INFO[0152] [addons] no user addons defined INFO[0152] Finished building Kubernetes cluster successfully

10. Installing the kubectl Client

Perform these steps on master01.

10.1 Install the kubectl Client

# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.9/bin/linux/amd64/kubectl
# chmod +x kubectl
# mv kubectl /usr/local/bin/kubectl
# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.9", GitCommit:"f59f5c2fda36e4036b49ec027e556a15456108f0", GitTreeState:"clean", BuildDate:"2022-01-19T17:33:06Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}

10.2 Configure the kubectl Cluster Management File and Verify

[root@master01 ~]# ls /app/rancher/
cluster.rkestate  cluster.yml  kube_config_cluster.yml
[root@master01 ~]# mkdir ./.kube
[root@master01 ~]# cp /app/rancher/kube_config_cluster.yml /root/.kube/config
[root@master01 ~]# kubectl get nodes
NAME            STATUS   ROLES          AGE     VERSION
192.168.10.10   Ready    controlplane   9m13s   v1.21.9
192.168.10.12   Ready    worker         9m12s   v1.21.9
192.168.10.14   Ready    etcd           9m12s   v1.21.9
[root@master01 ~]# kubectl get pods -n kube-system
NAME                                         READY   STATUS      RESTARTS   AGE
calico-kube-controllers-5685fbd9f7-gcwj7     1/1     Running     0          9m36s
canal-fz2bg                                  2/2     Running     0          9m36s
canal-qzw4n                                  2/2     Running     0          9m36s
canal-sstjn                                  2/2     Running     0          9m36s
coredns-8578b6dbdd-ftnf6                     1/1     Running     0          9m30s
coredns-autoscaler-f7b68ccb7-fzdgc           1/1     Running     0          9m30s
metrics-server-6bc7854fb5-kwppz              1/1     Running     0          9m25s
rke-coredns-addon-deploy-job--1-x56w2        0/1     Completed   0          9m31s
rke-ingress-controller-deploy-job--1-wzp2b   0/1     Completed   0          9m21s
rke-metrics-addon-deploy-job--1-ltlgn        0/1     Completed   0          9m26s
rke-network-plugin-deploy-job--1-nsbfn       0/1     Completed   0          9m41s

11. Cluster Web Management with Rancher

Run this on master01.

The Rancher dashboard is mainly used to manage the Kubernetes cluster: viewing cluster status, editing the cluster, and so on.

11.1 Start Rancher with docker run

[root@master01 ~]# docker run -d --restart=unless-stopped --privileged --name rancher -p 80:80 -p 443:443 rancher/rancher:v2.5.9
[root@master01 ~]# docker ps
CONTAINER ID   IMAGE                    COMMAND           CREATED         STATUS         PORTS                                                                      NAMES
0fd46ee77655   rancher/rancher:v2.5.9   "entrypoint.sh"   5 seconds ago   Up 3 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   rancher
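
Run this way, Rancher keeps all of its state inside the container. A variant worth considering for anything long-lived is to bind-mount a host directory (the /opt/rancher path here is an arbitrary choice) so the data survives container re-creation; a sketch:

[root@master01 ~]# docker run -d --restart=unless-stopped --privileged --name rancher \
    -p 80:80 -p 443:443 \
    -v /opt/rancher:/var/lib/rancher \
    rancher/rancher:v2.5.9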

11.2 Access Rancher

[root@master01 ~]# ss -anput | grep ":80"
tcp   LISTEN   0   128   *:80      *:*      users:(("docker-proxy",pid=29564,fd=4))
tcp   LISTEN   0   128   [::]:80   [::]:*   users:(("docker-proxy",pid=29570,fd=4))


11.3 Import the Kubernetes Cluster in the Rancher Web UI

Running the first suggested import command fails, because Rancher's self-signed certificate is not valid for the node IP:

[root@master01 ~]# kubectl apply -f https://192.168.10.10/v3/import/vljtg5srnznpzts662q6ncs4jm6f8kd847xqs97d6fbs5rhn7kfzvk_c-ktwhn.yaml
Unable to connect to the server: x509: certificate is valid for 127.0.0.1, 172.17.0.2, not 192.168.10.10

Using the second command (curl --insecure skips the certificate check), the first attempt fails:

[root@master01 ~]# curl --insecure -sfL https://192.168.10.10/v3/import/vljtg5srnznpzts662q6ncs4jm6f8kd847xqs97d6fbs5rhn7kfzvk_c-ktwhn.yaml | kubectl apply -f -
error: no objects passed to apply

The second attempt succeeds:

[root@master01 ~]# curl --insecure -sfL https://192.168.10.10/v3/import/vljtg5srnznpzts662q6ncs4jm6f8kd847xqs97d6fbs5rhn7kfzvk_c-ktwhn.yaml | kubectl apply -f -
Warning: resource clusterroles/proxy-clusterrole-kubeapiserver is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver configured
Warning: resource clusterrolebindings/proxy-role-binding-kubernetes-master is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master configured
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-0619853 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
deployment.apps/cattle-cluster-agent created


12. Updating Cluster Nodes

12.1 Adding a Worker Node

RKE supports adding and removing nodes in the worker and controlplane roles.

To add nodes, edit cluster.yml to include the additional node entries and the roles they play in the Kubernetes cluster; to remove nodes, delete their entries from the node list in cluster.yml.

Running rke up --update-only adds or removes worker nodes only, ignoring everything in cluster.yml other than the worker node entries.

Adding or removing worker nodes with --update-only may still trigger redeployment or updates of addons and other components.

The environment on a newly added node must match the existing nodes: install Docker, create the rancher user, disable swap, and so on.

12.1.1 Configure the Host Name

# hostnamectl set-hostname xxx

12.1.2 Configure the Host IP Address

# vim /etc/sysconfig/network-scripts/ifcfg-ens33
# cat /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"    # changed to static addressing
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
Add the following entries:
IPADDR="192.168.10.XXX"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"

12.1.3 Host Name and IP Address Resolution

# vim /etc/hosts
# cat /etc/hosts
......
192.168.10.10 master01
192.168.10.11 master02
192.168.10.12 worker01
192.168.10.13 worker02
192.168.10.14 etcd01

12.1.4 Configure ip_forward and Bridge Filtering

# vim /etc/sysctl.conf
# cat /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# modprobe br_netfilter
# sysctl -p /etc/sysctl.conf

12.1.5 Host Security Settings

12.1.5.1 Firewall

# systemctl stop firewalld
# systemctl disable firewalld
# firewall-cmd --state

12.1.5.2 SELinux

Reboot the operating system after making this change.

永久關(guān)閉,一定要重啟操作系統(tǒng)后生效。 sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config 臨時(shí)關(guān)閉,不重啟操作系統(tǒng),即刻生效。 # setenforce 0

12.1.6 Disable Swap

永久關(guān)閉,需要重啟操作系統(tǒng)生效。 # sed -ri 's/.*swap.*/#&/' /etc/fstab # cat /etc/fstab...... #/dev/mapper/centos_192-swap swap swap defaults 0 0 臨時(shí)關(guān)閉,不需要重啟操作系統(tǒng),即刻生效。 # swapoff -a

12.1.7 Time Synchronization

# yum -y install ntpdate
# crontab -e
0 */1 * * * ntpdate time1.aliyun.com

12.1.8 Docker Deployment

12.1.8.1 Configure the Docker YUM Repository

# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

12.1.8.2 Install Docker CE

# yum -y install docker-ce

12.1.8.3 Start the Docker Service

# systemctl enable docker
# systemctl start docker

12.1.8.4 Configure a Docker Registry Mirror

# vim /etc/docker/daemon.json
# cat /etc/docker/daemon.json
{
    "registry-mirrors": ["https://s27w6kze.mirror.aliyuncs.com"]
}

12.1.9 Install docker-compose

# curl -L "https://github.com/docker/compose/releases/download/1.28.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# chmod +x /usr/local/bin/docker-compose
# ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
# docker-compose --version

12.1.10 Add the rancher User

When using CentOS, the root account cannot be used, so a dedicated account must be added for Docker-related operations. The docker group membership takes effect only after the rancher user logs in again (rebooting the system also works; restarting just the Docker service does not). After that, the rancher user can run the docker ps command directly.

# useradd rancher
# usermod -aG docker rancher
# echo 123 | passwd --stdin rancher

12.1.11 Copy the SSH Key

Copy the key from the host where the rke binary is installed; if it has already been copied, there is no need to repeat this.

12.1.11.1 Copy the Key

# ssh-copy-id rancher@worker02

12.1.11.2 Verify the SSH Key

From the rke host, test the connection to the new host and verify that the docker ps command works.

# ssh rancher@worker02
remote host# docker ps

12.1.12 Edit the cluster.yml File

Add the worker node entry to the file.

# vim cluster.yml
......
- address: 192.168.10.13
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: "rancher"
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
......
# rke up --update-only

Output:

INFO[0000] Running RKE version: v1.3.7
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO[0000] [certificates] Generating admin certificates and kubeconfig
INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
INFO[0000] Building Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [192.168.10.13]
INFO[0000] [dialer] Setup tunnel for host [192.168.10.10]
INFO[0000] [dialer] Setup tunnel for host [192.168.10.14]
INFO[0000] [dialer] Setup tunnel for host [192.168.10.12]
INFO[0000] [network] Deploying port listener containers
INFO[0000] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14]
INFO[0000] Starting container [rke-etcd-port-listener] on host [192.168.10.14], try #1
INFO[0000] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0000] Starting container [rke-cp-port-listener] on host [192.168.10.10], try #1
INFO[0000] Pulling image [rancher/rke-tools:v0.1.78] on host [192.168.10.13], try #1
INFO[0000] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12]
INFO[0001] Starting container [rke-worker-port-listener] on host [192.168.10.12], try #1
INFO[0031] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13]
INFO[0032] Starting container [rke-worker-port-listener] on host [192.168.10.13], try #1
INFO[0033] [network] Successfully started [rke-worker-port-listener] container on host [192.168.10.13]
INFO[0033] [network] Port listener containers deployed successfully
INFO[0033] [network] Running control plane -> etcd port checks
INFO[0033] [network] Checking if host [192.168.10.10] can connect to host(s) [192.168.10.14] on port(s) [2379], try #1
INFO[0033] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0033] Starting container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0033] [network] Successfully started [rke-port-checker] container on host [192.168.10.10]
INFO[0033] Removing container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0033] [network] Running control plane -> worker port checks
INFO[0033] [network] Checking if host [192.168.10.10] can connect to host(s) [192.168.10.12 192.168.10.13] on port(s) [10250], try #1
INFO[0033] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0033] Starting container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0033] [network] Successfully started [rke-port-checker] container on host [192.168.10.10]
INFO[0033] Removing container [rke-port-checker] on host [192.168.10.10], try #1
INFO[0033] [network] Running workers -> control plane port checks
INFO[0033] [network] Checking if host [192.168.10.12] can connect to host(s) [192.168.10.10] on port(s) [6443], try #1
INFO[0033] [network] Checking if host [192.168.10.13] can connect to host(s) [192.168.10.10] on port(s) [6443], try #1
INFO[0033] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13]
INFO[0033] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12]
INFO[0034] Starting container [rke-port-checker] on host [192.168.10.13], try #1
INFO[0034] Starting container [rke-port-checker] on host [192.168.10.12], try #1
INFO[0034] [network] Successfully
started [rke-port-checker] container on host [192.168.10.12] INFO[0034] Removing container [rke-port-checker] on host [192.168.10.12], try #1 INFO[0034] [network] Successfully started [rke-port-checker] container on host [192.168.10.13] INFO[0034] Removing container [rke-port-checker] on host [192.168.10.13], try #1 INFO[0034] [network] Checking KubeAPI port Control Plane hosts INFO[0034] [network] Removing port listener containers INFO[0034] Removing container [rke-etcd-port-listener] on host [192.168.10.14], try #1 INFO[0034] [remove/rke-etcd-port-listener] Successfully removed container on host [192.168.10.14] INFO[0034] Removing container [rke-cp-port-listener] on host [192.168.10.10], try #1 INFO[0034] [remove/rke-cp-port-listener] Successfully removed container on host [192.168.10.10] INFO[0034] Removing container [rke-worker-port-listener] on host [192.168.10.12], try #1 INFO[0034] Removing container [rke-worker-port-listener] on host [192.168.10.13], try #1 INFO[0034] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.10.12] INFO[0034] [remove/rke-worker-port-listener] Successfully removed container on host [192.168.10.13] INFO[0034] [network] Port listener containers removed successfully INFO[0034] [certificates] Deploying kubernetes certificates to Cluster nodes INFO[0034] Checking if container [cert-deployer] is running on host [192.168.10.13], try #1 INFO[0034] Checking if container [cert-deployer] is running on host [192.168.10.14], try #1 INFO[0034] Checking if container [cert-deployer] is running on host [192.168.10.12], try #1 INFO[0034] Checking if container [cert-deployer] is running on host [192.168.10.10], try #1 INFO[0034] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13] INFO[0034] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12] INFO[0034] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14] INFO[0034] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0034] Starting container [cert-deployer] on host [192.168.10.13], try #1 INFO[0034] Starting container [cert-deployer] on host [192.168.10.14], try #1 INFO[0034] Starting container [cert-deployer] on host [192.168.10.12], try #1 INFO[0034] Starting container [cert-deployer] on host [192.168.10.10], try #1 INFO[0034] Checking if container [cert-deployer] is running on host [192.168.10.13], try #1 INFO[0035] Checking if container [cert-deployer] is running on host [192.168.10.12], try #1 INFO[0035] Checking if container [cert-deployer] is running on host [192.168.10.14], try #1 INFO[0035] Checking if container [cert-deployer] is running on host [192.168.10.10], try #1 INFO[0039] Checking if container [cert-deployer] is running on host [192.168.10.13], try #1 INFO[0039] Removing container [cert-deployer] on host [192.168.10.13], try #1 INFO[0040] Checking if container [cert-deployer] is running on host [192.168.10.12], try #1 INFO[0040] Checking if container [cert-deployer] is running on host [192.168.10.14], try #1 INFO[0040] Removing container [cert-deployer] on host [192.168.10.12], try #1 INFO[0040] Removing container [cert-deployer] on host [192.168.10.14], try #1 INFO[0040] Checking if container [cert-deployer] is running on host [192.168.10.10], try #1 INFO[0040] Removing container [cert-deployer] on host [192.168.10.10], try #1 INFO[0040] [reconcile] Rebuilding and updating local kube config INFO[0040] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] INFO[0040] [reconcile] host [192.168.10.10] is 
a control plane node with reachable Kubernetes API endpoint in the cluster INFO[0040] [certificates] Successfully deployed kubernetes certificates to Cluster nodes INFO[0040] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [192.168.10.10] INFO[0040] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10] INFO[0040] Starting container [file-deployer] on host [192.168.10.10], try #1 INFO[0040] Successfully started [file-deployer] container on host [192.168.10.10] INFO[0040] Waiting for [file-deployer] container to exit on host [192.168.10.10] INFO[0040] Waiting for [file-deployer] container to exit on host [192.168.10.10] INFO[0040] Container [file-deployer] is still running on host [192.168.10.10]: stderr: [], stdout: [] INFO[0041] Waiting for [file-deployer] container to exit on host [192.168.10.10] INFO[0041] Removing container [file-deployer] on host [192.168.10.10], try #1 INFO[0041] [remove/file-deployer] Successfully removed container on host [192.168.10.10] INFO[0041] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes INFO[0041] [reconcile] Reconciling cluster state INFO[0041] [reconcile] Check etcd hosts to be deleted INFO[0041] [reconcile] Check etcd hosts to be added INFO[0041] [reconcile] Rebuilding and updating local kube config INFO[0041] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml] INFO[0041] [reconcile] host [192.168.10.10] is a control plane node with reachable Kubernetes API endpoint in the cluster INFO[0041] [reconcile] Reconciled cluster state successfully INFO[0041] max_unavailable_worker got rounded down to 0, resetting to 1 INFO[0041] Setting maxUnavailable for worker nodes to: 1 INFO[0041] Setting maxUnavailable for controlplane nodes to: 1 INFO[0041] Pre-pulling kubernetes images INFO[0041] Pulling image [rancher/hyperkube:v1.21.9-rancher1] on host [192.168.10.13], try #1 INFO[0041] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.12] INFO[0041] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.14] INFO[0041] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.10] INFO[0130] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.13] INFO[0130] Kubernetes images pulled successfully INFO[0130] [etcd] Building up etcd plane.. INFO[0130] [etcd] Successfully started etcd plane.. Checking etcd cluster health INFO[0130] [etcd] etcd host [192.168.10.14] reported healthy=true INFO[0130] [controlplane] Now checking status of node 192.168.10.10, try #1 INFO[0130] [authz] Creating rke-job-deployer ServiceAccount INFO[0130] [authz] rke-job-deployer ServiceAccount created successfully INFO[0130] [authz] Creating system:node ClusterRoleBinding INFO[0130] [authz] system:node ClusterRoleBinding created successfully INFO[0130] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding INFO[0130] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully INFO[0130] Successfully Deployed state file at [./cluster.rkestate] INFO[0130] [state] Saving full cluster state to Kubernetes INFO[0130] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state INFO[0130] [worker] Now checking status of node 192.168.10.14, try #1 INFO[0130] [worker] Now checking status of node 192.168.10.12, try #1 INFO[0130] [worker] Upgrading Worker Plane.. 
INFO[0155] First checking and processing worker components for upgrades on nodes with etcd role one at a time INFO[0155] [workerplane] Processing host 192.168.10.14 INFO[0155] [worker] Now checking status of node 192.168.10.14, try #1 INFO[0155] [worker] Getting list of nodes for upgrade INFO[0155] [workerplane] Upgrade not required for worker components of host 192.168.10.14 INFO[0155] Now checking and upgrading worker components on nodes with only worker role 1 at a time INFO[0155] [workerplane] Processing host 192.168.10.12 INFO[0155] [worker] Now checking status of node 192.168.10.12, try #1 INFO[0155] [worker] Getting list of nodes for upgrade INFO[0155] [workerplane] Upgrade not required for worker components of host 192.168.10.12 INFO[0155] [workerplane] Processing host 192.168.10.13 INFO[0155] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13] INFO[0156] Starting container [nginx-proxy] on host [192.168.10.13], try #1 INFO[0156] [worker] Successfully started [nginx-proxy] container on host [192.168.10.13] INFO[0156] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13] INFO[0156] Starting container [rke-log-linker] on host [192.168.10.13], try #1 INFO[0156] [worker] Successfully started [rke-log-linker] container on host [192.168.10.13] INFO[0156] Removing container [rke-log-linker] on host [192.168.10.13], try #1 INFO[0156] [remove/rke-log-linker] Successfully removed container on host [192.168.10.13] INFO[0156] Checking if container [service-sidekick] is running on host [192.168.10.13], try #1 INFO[0156] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13] INFO[0156] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.13] INFO[0156] Starting container [kubelet] on host [192.168.10.13], try #1 INFO[0156] [worker] Successfully started [kubelet] container on host [192.168.10.13] INFO[0156] [healthcheck] Start Healthcheck on service [kubelet] on host [192.168.10.13] INFO[0162] [healthcheck] service [kubelet] on host [192.168.10.13] is healthy INFO[0162] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13] INFO[0162] Starting container [rke-log-linker] on host [192.168.10.13], try #1 INFO[0162] [worker] Successfully started [rke-log-linker] container on host [192.168.10.13] INFO[0162] Removing container [rke-log-linker] on host [192.168.10.13], try #1 INFO[0162] [remove/rke-log-linker] Successfully removed container on host [192.168.10.13] INFO[0162] Image [rancher/hyperkube:v1.21.9-rancher1] exists on host [192.168.10.13] INFO[0162] Starting container [kube-proxy] on host [192.168.10.13], try #1 INFO[0162] [worker] Successfully started [kube-proxy] container on host [192.168.10.13] INFO[0162] [healthcheck] Start Healthcheck on service [kube-proxy] on host [192.168.10.13] INFO[0167] [healthcheck] service [kube-proxy] on host [192.168.10.13] is healthy INFO[0167] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13] INFO[0168] Starting container [rke-log-linker] on host [192.168.10.13], try #1 INFO[0168] [worker] Successfully started [rke-log-linker] container on host [192.168.10.13] INFO[0168] Removing container [rke-log-linker] on host [192.168.10.13], try #1 INFO[0168] [remove/rke-log-linker] Successfully removed container on host [192.168.10.13] INFO[0168] [worker] Successfully upgraded Worker Plane.. 
INFO[0168] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.14]
INFO[0168] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.13]
INFO[0168] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.12]
INFO[0168] Image [rancher/rke-tools:v0.1.78] exists on host [192.168.10.10]
INFO[0168] Starting container [rke-log-cleaner] on host [192.168.10.13], try #1
INFO[0168] Starting container [rke-log-cleaner] on host [192.168.10.14], try #1
INFO[0168] Starting container [rke-log-cleaner] on host [192.168.10.12], try #1
INFO[0168] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.13]
INFO[0168] Removing container [rke-log-cleaner] on host [192.168.10.13], try #1
INFO[0168] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.12]
INFO[0168] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.14]
INFO[0168] Removing container [rke-log-cleaner] on host [192.168.10.14], try #1
INFO[0168] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.13]
INFO[0168] Starting container [rke-log-cleaner] on host [192.168.10.10], try #1
INFO[0168] Removing container [rke-log-cleaner] on host [192.168.10.12], try #1
INFO[0168] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.12]
INFO[0168] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.14]
INFO[0169] [cleanup] Successfully started [rke-log-cleaner] container on host [192.168.10.10]
INFO[0169] Removing container [rke-log-cleaner] on host [192.168.10.10], try #1
INFO[0169] [remove/rke-log-cleaner] Successfully removed container on host [192.168.10.10]
INFO[0169] [sync] Syncing nodes Labels and Taints
INFO[0169] [sync] Successfully synced nodes Labels and Taints
INFO[0169] [network] Setting up network plugin: canal
INFO[0169] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0169] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0169] [addons] Executing deploy job rke-network-plugin
INFO[0169] [addons] Setting up coredns
INFO[0169] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0169] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0169] [addons] Executing deploy job rke-coredns-addon
INFO[0169] [addons] CoreDNS deployed successfully
INFO[0169] [dns] DNS provider coredns deployed successfully
INFO[0169] [addons] Setting up Metrics Server
INFO[0169] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0169] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0169] [addons] Executing deploy job rke-metrics-addon
INFO[0169] [addons] Metrics Server deployed successfully
INFO[0169] [ingress] Setting up nginx ingress controller
INFO[0169] [ingress] removing admission batch jobs if they exist
INFO[0169] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0169] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0169] [addons] Executing deploy job rke-ingress-controller
INFO[0169] [ingress] removing default backend service and deployment if they exist
INFO[0169] [ingress] ingress controller nginx deployed successfully
INFO[0169] [addons] Setting up user addons
INFO[0169] [addons] no user addons defined
INFO[0169] Finished building Kubernetes cluster successfully
[root@master01 rancher]# kubectl get nodes
NAME            STATUS   ROLES          AGE   VERSION
192.168.10.10   Ready    controlplane   51m   v1.21.9
192.168.10.12   Ready    worker         51m   v1.21.9
192.168.10.13   Ready    worker         62s   v1.21.9
192.168.10.14   Ready    etcd           51m   v1.21.9

12.2 Removing a worker node

To remove a worker node, edit the cluster.yml file and delete the corresponding node entry, then re-run rke up. If you want the node's workloads rescheduled gracefully first, you can optionally drain it, as sketched below.
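A minimal drain sketch, assuming the node is still registered under its IP (node names in this cluster are IP addresses); this step is optional and not part of RKE itself:

# kubectl drain 192.168.10.13 --ignore-daemonsets --delete-emptydir-data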

# vim cluster.yml
......
- address: 192.168.10.13
  port: "22"
  internal_address: ""
  role:
  - worker
  hostname_override: ""
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
......
[root@master01 rancher]# rke up --update-only
[root@master01 rancher]# kubectl get nodes
NAME            STATUS   ROLES          AGE   VERSION
192.168.10.10   Ready    controlplane   53m   v1.21.9
192.168.10.12   Ready    worker         53m   v1.21.9
192.168.10.14   Ready    etcd           53m   v1.21.9

Note, however, that the pods on the removed worker node are not terminated; they keep running on the host, as the docker ps output below shows. If the node is reused, the stale pods are deleted automatically when a new Kubernetes cluster is created on it. A manual cleanup sketch follows the output.

[root@worker02 ~]# docker ps
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS   NAMES
b96aa2ac2c25   rancher/nginx-ingress-controller   "/usr/bin/dumb-init …"   3 minutes ago   Up 3 minutes           k8s_controller_nginx-ingress-controller-wxzv4_ingress-nginx_2f6d0569-6a92-4208-8fae-f46b23f2b123_0
f8e7f496e9af   rancher/mirrored-coredns-coredns   "/coredns -conf /etc…"   3 minutes ago   Up 3 minutes           k8s_coredns_coredns-8578b6dbdd-xqqdd_kube-system_f10a7413-1f1a-44bf-9070-b7420c296a39_0
7df4ce7aad96   rancher/mirrored-coreos-flannel    "/opt/bin/flanneld -…"   3 minutes ago   Up 3 minutes           k8s_kube-flannel_canal-6m2wj_kube-system_5a55b012-e6ba-4b41-aee4-323a7ce99871_0
38693983ea9c   rancher/mirrored-calico-node       "start_runit"            4 minutes ago   Up 3 minutes           k8s_calico-node_canal-6m2wj_kube-system_5a55b012-e6ba-4b41-aee4-323a7ce99871_0
c45bdddaba81   rancher/mirrored-pause:3.5         "/pause"                 4 minutes ago   Up 3 minutes           k8s_POD_nginx-ingress-controller-wxzv4_ingress-nginx_2f6d0569-6a92-4208-8fae-f46b23f2b123_29
7d97152ec302   rancher/mirrored-pause:3.5         "/pause"                 4 minutes ago   Up 3 minutes           k8s_POD_coredns-8578b6dbdd-xqqdd_kube-system_f10a7413-1f1a-44bf-9070-b7420c296a39_31
ea385d73aab9   rancher/mirrored-pause:3.5         "/pause"                 5 minutes ago   Up 5 minutes           k8s_POD_canal-6m2wj_kube-system_5a55b012-e6ba-4b41-aee4-323a7ce99871_0
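If the host will not be recycled into a new cluster right away, the leftover containers and state can be wiped by hand. The following is only a sketch, not an RKE command: the directory list covers state paths RKE typically leaves behind, so verify each path on your own host before deleting.

[root@worker02 ~]# docker rm -f $(docker ps -qa)
[root@worker02 ~]# docker volume rm $(docker volume ls -q)
[root@worker02 ~]# rm -rf /etc/kubernetes /etc/cni /opt/cni /var/lib/kubelet /var/lib/cni /var/lib/calico /var/run/calico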

12.3 Adding etcd nodes

12.3.1 Host preparation

Host preparation is the same as for adding a worker node; here two new hosts are prepared separately. A condensed sketch of the steps follows.
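This sketch assumes the same conventions as the earlier sections: the docker-ce repo is already configured, the nodes are added to /etc/hosts everywhere, and RKE connects as the rancher user over SSH. The hostnames etcd02/etcd03 are illustrative.

Run on 192.168.10.15 and 192.168.10.16:
# hostnamectl set-hostname etcd02    (etcd03 on the second host)
# systemctl disable --now firewalld
# setenforce 0
# swapoff -a
# yum -y install docker-ce
# systemctl enable --now docker
# useradd rancher
# usermod -aG docker rancher
# passwd rancher

Run on master01 (the host running rke):
# ssh-copy-id rancher@192.168.10.15
# ssh-copy-id rancher@192.168.10.16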

12.3.2 Modify the cluster.yml file

# vim cluster.yml
......
- address: 192.168.10.15
  port: "22"
  internal_address: ""
  role:
  - etcd
  hostname_override: ""
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 192.168.10.16
  port: "22"
  internal_address: ""
  role:
  - etcd
  hostname_override: ""
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
......
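Before invoking rke up, it is worth a quick sanity check that the RKE host can SSH to the new nodes as the rancher user and that this user can talk to the Docker daemon, since RKE requires both. For example:

[root@master01 rancher]# ssh rancher@192.168.10.15 docker version
[root@master01 rancher]# ssh rancher@192.168.10.16 docker version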

12.3.3 Run the rke up command

# rke up --update-only

12.3.4 Verify the result

[root@master01 rancher]# kubectl get nodes
NAME            STATUS   ROLES          AGE    VERSION
192.168.10.10   Ready    controlplane   114m   v1.21.9
192.168.10.12   Ready    worker         114m   v1.21.9
192.168.10.14   Ready    etcd           114m   v1.21.9
192.168.10.15   Ready    etcd           99s    v1.21.9
192.168.10.16   Ready    etcd           85s    v1.21.9

[root@etcd01 ~]# docker exec -it etcd /bin/sh
# etcdctl member list
746b681e35b1537c, started, etcd-192.168.10.16, https://192.168.10.16:2380, https://192.168.10.16:2379, false
b07954b224ba7459, started, etcd-192.168.10.15, https://192.168.10.15:2380, https://192.168.10.15:2379, false
e94295bf0a471a67, started, etcd-192.168.10.14, https://192.168.10.14:2380, https://192.168.10.14:2379, false
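Because RKE already sets the etcdctl endpoint and certificate environment variables inside the etcd container, you can also query cluster health from the same shell. A small sketch using standard etcdctl v3 subcommands:

# etcdctl endpoint health --cluster
# etcdctl endpoint status --cluster -w table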

13. Deploying an application

13.1 Create the resource manifest files

# vim nginx.yaml
# cat nginx.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx
      env: test
      owner: rancher
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
        env: test
        owner: rancher
    spec:
      containers:
      - name: nginx-test
        image: nginx:1.19.9
        ports:
        - containerPort: 80
# kubectl apply -f nginx.yaml
# vim nginx-service.yaml
# cat nginx-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  labels:
    run: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    owner: rancher
# kubectl apply -f nginx-service.yaml
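Optionally, you can block until the Deployment finishes rolling out before checking the pods; on success kubectl prints a confirmation message:

# kubectl rollout status deployment/nginx-test
deployment "nginx-test" successfully rolled out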

13.2 Verification

# kubectl get pods -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE            NOMINATED NODE   READINESS GATES
nginx-test-7d95fb4447-6k4p9   1/1     Running   0          55s   10.42.2.11   192.168.10.12   <none>           <none>
nginx-test-7d95fb4447-sfnsk   1/1     Running   0          55s   10.42.2.10   192.168.10.12   <none>           <none>
# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP        120m    <none>
nginx-test   NodePort    10.43.158.143   <none>        80:32669/TCP   2m22s   owner=rancher
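The Service is now reachable on the NodePort (32669 in the output above) via any node IP; the response headers should show the nginx version from the manifest, roughly like this:

# curl -I http://192.168.10.12:32669
HTTP/1.1 200 OK
Server: nginx/1.19.9
...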
