Hands-On: Deploying a Complete Enterprise-Grade Highly Available K8s Cluster (Successfully Tested) - 2021.10.20


Last updated

2022-10-14 18:17:39

Lab Environment

Lab environment:
1. Windows 10 host running VMware Workstation VMs;
2. K8s cluster: 3 CentOS 7.6 (1810) VMs, 2 master nodes and 1 worker node;
K8s version: v1.20; container runtime: docker://20.10.7

1. Hardware Environment

3 VMs, each with 2 CPUs, 2 GB RAM, and a 20 GB disk (NAT mode, with outbound internet access).

| Role        | Hostname    | IP          |
| ----------- | ----------- | ----------- |
| master node | k8s-master1 | 172.29.9.41 |
| master node | k8s-master2 | 172.29.9.42 |
| node        | k8s-node1   | 172.29.9.43 |
| VIP         | /           | 172.29.9.88 |

👉 Note:

This deployment reuses the 3 k8s nodes as etcd nodes. (Note: this is a test environment, so the etcd cluster simply reuses the 3 k8s nodes; in a real production environment, dedicated machines can be used for the etcd cluster.)

2 master nodes provide high availability;

1 worker node runs the workloads.

2. Software Environment

| Software   | Version                                              |
| ---------- | ---------------------------------------------------- |
| OS         | centos7.6_x64 1810 mini (other centos7.x also works) |
| docker     | 20.10.7-ce                                           |
| kubernetes | v1.20.0                                              |

3. Architecture Diagrams

  • Conceptual diagram:

  • Actual topology:

Lab Software

Link: https://pan.baidu.com/s/1-QDyJBsJizN8SbBHAp-JXQ

Extraction code: 1b25

Lab software package: 部署一套完整的企业级高可用K8s集群-20211020

1. Base Environment Configuration

👉 Configure on all nodes

1. Basic system settings

systemctl stop firewalld && systemctl disable firewalld
systemctl stop NetworkManager && systemctl disable NetworkManager

setenforce 0
sed -i s/SELINUX=enforcing/SELINUX=disabled/ /etc/selinux/config

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

cat >> /etc/hosts << EOF
172.29.9.41 k8s-master1
172.29.9.42 k8s-master2
172.29.9.43 k8s-node1
EOF

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

yum install ntpdate -y
ntpdate time.windows.com
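A quick way to confirm the bridge sysctls took effect (an optional check, not part of the original steps; on a minimal install you may need to load the module first with modprobe br_netfilter):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# expected:
# net.bridge.bridge-nf-call-iptables = 1
# net.bridge.bridge-nf-call-ip6tables = 1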

2. Set the hostname on each of the 3 nodes

hostnamectl --static set-hostname k8s-master1
bash

hostnamectl --static set-hostname k8s-master2
bash

hostnamectl --static set-hostname k8s-node1
bash

3. Set up passwordless SSH

Configure passwordless SSH among the 3 machines (this makes it easy to quickly copy files from one machine to the others later):

# Run on k8s-master1:
ssh-keygen    # just press Enter through the prompts
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.29.9.42
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.29.9.43

4. Upload the required software

Upload the software needed for this deployment to the k8s-master1 node:

👉 Take a snapshot

At this point the initial environment on all 3 nodes is configured; remember to take a snapshot of each!

2. Deploy the Nginx + Keepalived High-Availability Load Balancer

👉 (Only the 2 master nodes need this configuration)

1. Install the packages

👉 (Configure on both the primary and backup master nodes)

yum install epel-release -y
yum install nginx keepalived -y

2. Nginx configuration file

👉 (Configure on both the primary and backup master nodes)

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver components on the two masters
# The stream block is nginx's L4 load-balancing module; we are not using L7 (https) load balancing here
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 172.29.9.41:6443;   # Master1 APISERVER IP:PORT; change to your master node IPs
        server 172.29.9.42:6443;   # Master2 APISERVER IP:PORT
    }

    server {
        listen 16443;  # nginx shares the host with the master, so this listen port cannot be 6443 or it would conflict
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
}
EOF
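Before starting the service, you can run nginx's built-in configuration check (an optional step, not in the original write-up). On a stock CentOS nginx it may fail with an unknown "stream" directive until the stream module is installed, which is exactly the issue handled in step 4 below:

nginx -t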

3. Keepalived configuration file

1. Configuration on the Nginx master

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33            # Change to the actual NIC name
    virtual_router_id 51       # VRRP router ID instance; each instance is unique
    priority 100               # Priority; set 90 on the backup server
    advert_int 1               # VRRP heartbeat advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        172.29.9.88/16
    }
    track_script {
        check_nginx
    }
}
EOF

  • vrrp_script: specifies the script that checks nginx's working state (keepalived decides whether to fail over based on nginx's status)

  • virtual_ipaddress: the virtual IP (VIP)

Create the script referenced above that checks nginx's running state:

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
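You can exercise the health-check script by hand before handing it to keepalived (a quick sanity check, not in the original steps):

/etc/keepalived/check_nginx.sh; echo $?
# prints 0 while something is listening on 16443, and 1 otherwise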

2. Configuration on the Nginx backup

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51   # VRRP router ID instance; each instance is unique
    priority 90            # 90 here on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.29.9.88/16     # VIP
    }
    track_script {
        check_nginx
    }
}
EOF

  • Create the script referenced above that checks nginx's running state:

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

Note:

keepalived decides whether to fail over based on the script's exit code (0 = working normally, non-zero = not working).

4. Start the services and enable them at boot

👉 (Configure on both master nodes)

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived
  • Note: starting nginx fails at this point

journalctl -u nginx shows the reason nginx fails: current nginx builds no longer include the stream module by default, which triggers the error:

  • We need to install it separately:

[root@k8s-master1 ~]#yum search stream|grep nginx
nginx-mod-stream.x86_64 : Nginx stream modules

yum install -y nginx-mod-stream

  • Start again and everything is fine. Run these two steps on both master nodes.

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

5. Check keepalived's working state

Check on the master node:

The virtual IP 172.29.9.88 is bound to the ens33 NIC, which means keepalived is working correctly.
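The equivalent command-line check (an optional aid):

ip addr show ens33 | grep -w inet
# the VIP 172.29.9.88/16 should be listed alongside the node's own address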

6. Nginx + Keepalived high-availability test

  • Test method:

Stop Nginx on the master node and check whether the VIP fails over to the backup server.

On the Nginx Master, run pkill nginx.

On the Nginx Backup, use ip addr to confirm the VIP has been bound there.

  • Actual test procedure

First, look at the IP configuration on the nginx master node:

Then start a continuous ping of the VIP from Windows:

Now check nginx's status on the nginx master node and run pkill nginx:

[root@k8s-master1 ~]#ss -antup|grep nginx
tcp    LISTEN   0   128   *:16443   *:*   users:(("nginx",pid=25229,fd=7),("nginx",pid=25228,fd=7),("nginx",pid=25227,fd=7))
[root@k8s-master1 ~]#pkill nginx
[root@k8s-master1 ~]#ss -antup|grep nginx
[root@k8s-master1 ~]#

Confirm nginx is stopped on the master and watch the ping test: exactly one packet is dropped.

On the Nginx Backup, ip addr shows the VIP has been bound successfully:

This matches the expected behavior.

Now start nginx on the master node again and observe what happens:

Note: keepalived drops one packet while switching the VIP.
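To watch the VIP move in real time from either node during the test (an optional aid, not in the original):

watch -n1 'ip addr show ens33 | grep 172.29.9.88'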

👉 Conclusions

01. Both the master and backup nodes run the nginx service;

02. keepalived drops one packet while switching the VIP. Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a VIP. In the topology above, keepalived decides whether to fail over (move the VIP) based on nginx's running state: for example, when the nginx master node dies, the VIP is automatically bound on the nginx backup node, so the VIP stays reachable and nginx remains highly available.

03. Keep an eye on the nginx service itself: the health check only watches nginx, so the VIP follows wherever nginx is healthy.

3. Deploy the Etcd Cluster

👉 (Only the etcd nodes need this configuration; since this deployment reuses the 3 cluster nodes as etcd members, all three must be configured)

1. Prepare the cfssl certificate tooling

cfssl is an open-source certificate management tool that generates certificates from JSON files and is easier to use than openssl.

Do this on any one server; here the k8s-master1 node is used.

# Option 1: download the binaries yourself
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# Note: if the links above do not open, just use the package I provided!

chmod +x cfssl*
for x in cfssl*; do mv $x ${x%*_linux-amd64}; done
mv cfssl* /usr/bin

# Option 2: use my package
# Upload it to the machine, then:
mv cfssl* /usr/bin

2. Generate the Etcd certificates

1. Self-signed certificate authority (CA)

  • Create a working directory:

mkdir -p ~/etcd_tls
cd ~/etcd_tls

  • Self-sign the CA:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

  • Generate the certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

This produces the ca.pem and ca-key.pem files.
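If you want to peek at the CA that was just created, openssl can print its subject and validity (an optional check; output formatting varies slightly across openssl versions):

openssl x509 -in ca.pem -noout -subject -dates
# the subject should show CN=etcd CA, with dates spanning the 10 years set by "expiry": "87600h"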

2. Issue the Etcd HTTPS certificate with the self-signed CA

  • Create the certificate signing request file:

cat > server-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "172.29.9.41",
    "172.29.9.42",
    "172.29.9.43"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

Note: the IPs in the hosts field above must include the internal communication IPs of every etcd node; not one can be missing! To make future scale-out easier, you can list a few extra reserved IPs. Listing extra etcd IPs is fine, but if you list too few, adding a node later means regenerating the certificate, which is a hassle.
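As an illustration, a hosts list padded with two reserved addresses might look like this (172.29.9.44 and 172.29.9.45 are hypothetical spares, not machines in this deployment):

  "hosts": [
    "172.29.9.41",
    "172.29.9.42",
    "172.29.9.43",
    "172.29.9.44",
    "172.29.9.45"
  ],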

  • Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

This produces the server.pem and server-key.pem files.
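To verify that all three etcd node IPs made it into the certificate's SAN list (an optional check):

openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'
# should show IP Address:172.29.9.41, IP Address:172.29.9.42, IP Address:172.29.9.43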

3. Download the binaries from GitHub

Download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

Here you can simply use the package I provided!

  • Upload the etcd package to the k8s-master1 node:

[root@k8s-master1 ~]#ll
total 18044
-r--------. 1 root root   545894 May 30 10:57 centos7-init.zip
drwxr-xr-x  2 root root      174 Oct 19 22:28 etcd_tls
-rw-r--r--  1 root root 17364053 Oct 19 22:31 etcd-v3.4.9-linux-amd64.tar.gz
-rw-r--r--. 1 root root   560272 May 30 10:48 wget-1.14-18.el7_6.1.x86_64.rpm
[root@k8s-master1 ~]#

4. Deploy the etcd cluster

👉 (Do the following on node 1; to simplify the work, all files generated on node 1 will be copied to nodes 2 and 3 afterwards.)

1. Create the working directory and unpack the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p
cd /root/
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create the etcd configuration file

🍀 Annotated version:

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.41:2380"    # 2380 is the cluster (peer) communication port
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.41:2379"  # 2379 is the data (client) port; all client reads and writes to etcd go through it

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.41:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.41:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"   # a simple authentication token; if several k8s clusters share the network, it prevents accidental cross-cluster syncing
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: cluster (peer) communication listen address
  • ETCD_LISTEN_CLIENT_URLS: client access listen address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised cluster (peer) address
  • ETCD_ADVERTISE_CLIENT_URLS: advertised client address
  • ETCD_INITIAL_CLUSTER: cluster node addresses
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: the node's state when joining the cluster; "new" for a new cluster, "existing" when joining an existing one

🍀 Final version:

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.41:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.41:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.41:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.41:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

3. Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4. Copy the certificates generated earlier

Copy the certificates generated earlier to the paths referenced in the configuration file:

cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/

5. Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

👉 Note: starting the first etcd node will hang for quite a while and is bound to fail;

Why?

journalctl -u etcd -f    # check the logs

dial tcp 172.29.9.43:2380: connect: connection refused

Note:

The logs show that connections to the other 2 etcd nodes are refused, so the etcd services on those 2 nodes need to be brought up as well.

6. Copy all the files generated on node 1 to nodes 2 and 3

scp -r /opt/etcd/ root@172.29.9.42:/opt/
scp /usr/lib/systemd/system/etcd.service root@172.29.9.42:/usr/lib/systemd/system/

scp -r /opt/etcd/ root@172.29.9.43:/opt/
scp /usr/lib/systemd/system/etcd.service root@172.29.9.43:/usr/lib/systemd/system/

Then, on nodes 2 and 3, change the node name and the current server IP in the etcd.conf configuration file:

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # Change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.41:2380"   # Change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.41:2379" # Change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.41:2380" # Change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.41:2379"       # Change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

  • Final configurations:

# Configuration on k8s-master2 (172.29.9.42)
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.42:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.42:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.42:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.42:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Configuration on k8s-node1 (172.29.9.43)
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.43:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.43:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

  • Finally, start etcd and enable it at boot, same as above.

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
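With the passwordless SSH set up earlier, the two remote members can also be started from k8s-master1 in one loop (a convenience sketch; run it only after each node's etcd.conf has been edited as above):

for ip in 172.29.9.42 172.29.9.43; do
    ssh root@$ip "systemctl daemon-reload && systemctl start etcd && systemctl enable etcd"
done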

7. Check the cluster status

[root@k8s-master1 ~]#ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://172.29.9.41:2379,https://172.29.9.42:2379,https://172.29.9.43:2379" \
  endpoint health --write-out=table

If every endpoint shows as healthy in the output table, the cluster was deployed successfully.

If there is a problem, the first step is to check the logs: /var/log/message or journalctl -u etcd
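Beyond endpoint health, etcdctl can also list the members, confirming all three joined under the expected names (an optional check):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://172.29.9.41:2379" member list --write-out=table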

4. Install Docker/kubeadm/kubelet

1. Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io

systemctl start docker && systemctl enable docker

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com"]
}
EOF

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p

systemctl daemon-reload
systemctl restart docker

2. Add the Alibaba Cloud YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

3. Install kubeadm, kubelet, and kubectl

These components are updated frequently, so pin the versions explicitly:

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
systemctl enable kubelet

5. Deploy the Kubernetes Masters

1. Initialize Master1

👉 (Run on k8s-master1)

  • Generate the init configuration file:

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 9037x2.tcaqnpaqkra9vsbw
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.29.9.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:  # Include every Master/LB/VIP IP; not one can be missing! You can list extra reserved IPs for future scale-out.
  - k8s-master1
  - k8s-master2
  - 172.29.9.41
  - 172.29.9.42
  - 172.29.9.88
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.29.9.88:16443  # Load balancer virtual IP (VIP) and port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:  # Use the external etcd
    endpoints:
    - https://172.29.9.41:2379  # the 3 etcd cluster nodes
    - https://172.29.9.42:2379
    - https://172.29.9.43:2379
    caFile: /opt/etcd/ssl/ca.pem  # Certificates needed to connect to etcd
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers  # The default registry k8s.gcr.io is unreachable from China, so use the Alibaba Cloud mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0  # K8s version, matching what was installed above
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network; must match the CNI component's yaml deployed below
  serviceSubnet: 10.96.0.0/12  # Cluster-internal virtual network; unified Pod access entry point
scheduler: {}
EOF
  • Bootstrap using the configuration file:

[root@k8s-master1 ~]#kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.9. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.9.41 172.29.9.88 172.29.9.42 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.036041 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9037x2.tcaqnpaqkra9vsbw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d

After initialization completes, two join commands are printed: the one with --control-plane is for joining additional masters to form the multi-master cluster; the one without it is for joining worker nodes.

Copy the kubeconfig that kubectl uses to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master1 ~]#kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   7m34s   v1.20.0
[root@k8s-master1 ~]#

2. Initialize Master2

If we run the suggested join command directly on k8s-master2 at this point, it fails:

Note: when master2 is joined with this command, it really only generates some local configuration files; the certificates are not re-initialized. Since master2 joins as a second control plane of the same cluster, kubeadm does not regenerate the root CA and related certificates: regenerating them would leave the cluster with inconsistent certificates and cause all kinds of certificate problems later. So when deploying the second control-plane node, we must copy the certificates over from the first node instead of generating an independent set; everything downstream, including authorization, is built on this single set of certificates.

  • Copy the certificates generated on Master1 to Master2:

scp -r /etc/kubernetes/pki/ 172.29.9.42:/etc/kubernetes/

  • Run the control-plane join command from the init output on master2:

kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d \
    --control-plane

This time the node joins successfully!

Copy the kubeconfig that kubectl uses to its default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master2 ~]#kubectl get node
NAME          STATUS     ROLES                  AGE   VERSION
k8s-master1   NotReady   control-plane,master   14m   v1.20.0
k8s-master2   NotReady   control-plane,master   30s   v1.20.0
[root@k8s-master2 ~]#

Note: the nodes show NotReady because the network plugin has not been deployed yet.
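The cause is visible in the system pods (an optional check): the CoreDNS pods sit in Pending until a CNI plugin is installed in section 7:

kubectl get pods -n kube-system
# the coredns-... pods stay Pending until the network component is deployed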

3. Test access through the load balancer

Pick any node in the K8s cluster and use curl to query the K8s version through the VIP:

This test is run from the k8s-master2 node; execute it several times.

[root@k8s-master2 ~]#curl -k https://172.29.9.88:16443/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.0",
  "gitCommit": "af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38",
  "gitTreeState": "clean",
  "buildDate": "2020-12-08T17:51:19Z",
  "goVersion": "go1.15.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
[root@k8s-master2 ~]#

The K8s version information is returned correctly, so the load balancer is working properly. The request flow is: curl -> VIP (nginx) -> apiserver

  • The Nginx log also shows which apiserver IP each request was forwarded to; check this on the k8s-master1 node:

tail /var/log/nginx/k8s-access.log -f

Test passed.

6. Join a Kubernetes Node

👉 Run on 172.29.9.43 (Node1).

  • To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:

[root@k8s-master1 ~]#kubeadm token create --print-join-command
kubeadm join 172.29.9.88:16443 --token vh0mrh.9s60jligjkrduacj --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d
[root@k8s-master1 ~]#

Additional nodes are joined the same way later.

Note: the default token is valid for 24 hours; once it expires it can no longer be used and a new one must be created. Generate one directly on a master node with: kubeadm token create --print-join-command
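kubeadm also lets you control the token lifetime with --ttl when you want a longer window for planned node additions (a ttl of 0 never expires; use that sparingly). A hedged example:

kubeadm token create --ttl 48h --print-join-command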

7. Deploy the Network Component

Calico is a pure layer-3 data center networking solution and is currently the mainstream network choice for Kubernetes.

  • Deploy Calico:

kubectl apply -f calico.yaml
kubectl get pod -A
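One thing worth checking before the apply: the kubeadm config above noted that podSubnet must match the CNI yaml. For Calico that usually means the CALICO_IPV4POOL_CIDR environment variable on the calico-node DaemonSet, shown below as an assumed snippet; in stock calico.yaml it may be commented out, in which case Calico falls back to auto-detection:

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"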

  • Once all the Calico Pods are Running, the nodes become Ready:

[root@k8s-master1 ~]#kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   29m   v1.20.0
k8s-master2   Ready    control-plane,master   24m   v1.20.0
k8s-node1     Ready    <none>                 16m   v1.20.0
[root@k8s-master1 ~]#

8. Deploy the Dashboard

The Dashboard is the officially provided UI for basic management of K8s resources.

kubectl apply -f kubernetes-dashboard.yaml

Check the deployment:

kubectl get pods -n kubernetes-dashboard

Access URL: https://NodeIP:30001
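To confirm the service is actually exposed on that NodePort (an optional check; the service and namespace names assume the stock recommended.yaml that this kubernetes-dashboard.yaml is based on):

kubectl get svc -n kubernetes-dashboard kubernetes-dashboard
# TYPE should be NodePort, with PORT(S) 443:30001/TCP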

Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

  • Log in to the Dashboard with the token from the output.

👉 Once all the steps above are done, remember to take a snapshot of the 3 nodes so that a damaged cluster can be restored quickly later.

Credits

Thanks to 阿良 for sharing. 😘

About Me

What my blog stands for:

  • Clean layout, concise language;
  • Docs as manuals: detailed steps, no buried pitfalls, source files provided;
  • Every hands-on article is personally tested; if you run into any questions while following along, feel free to contact me and we will work them out together!

🍀 WeChat QR code

x2675263825 (舍得), QQ: 2675263825.

🍀 WeChat official account

《云原生架构师实战》

🍀 Yuque

https://www.yuque.com/xyy-onlyone

🍀 Blog

www.onlyyou520.com

🍀 CSDN

https://blog.csdn.net/weixin_39246554?spm=1010.2135.3001.5421

🍀 Zhihu

https://www.zhihu.com/people/foryouone

Finally

That's all for this one. Thanks for reading; may your days be happy and meaningful. See you next time!
