
二進(jìn)制安裝部署kubernetes集群---超詳細(xì)教程

前言:本篇博客是博主踩過無數(shù)坑,反復(fù)查閱資料,一步步搭建完成后整理的個(gè)人心得,分享給大家~~~

本文所需的安裝包,都上傳在我的網(wǎng)盤中,需要的可以打賞博主一杯咖啡錢,然后私密博主,博主會(huì)很快答復(fù)呦~

00. Component Versions and Configuration Policy

00-01. Component versions

  • Kubernetes 1.10.4
  • Docker 18.03.1-ce
  • Etcd 3.3.7
  • Flanneld 0.10.0
  • Add-ons:
    • CoreDNS
    • Dashboard
    • Heapster (influxdb, grafana)
    • Metrics-Server
    • EFK (elasticsearch, fluentd, kibana)
  • Image registries:
    • docker registry
    • harbor

00-02. Main configuration policy

kube-apiserver:

  • 3-node high availability built with keepalived and haproxy;
  • insecure port 8080 and anonymous access disabled;
  • serves https on secure port 6443;
  • strict authentication and authorization policy (x509, token, RBAC);
  • bootstrap token authentication enabled, supporting kubelet TLS bootstrapping;
  • uses https for encrypted communication with kubelet and etcd;

kube-controller-manager:

  • 3-node high availability;
  • insecure port disabled; serves https on secure port 10252;
  • accesses the apiserver's secure port with a kubeconfig file;
  • automatically approves kubelet certificate signing requests (CSRs) and rotates certificates on expiry;
  • each controller accesses the apiserver with its own ServiceAccount;

kube-scheduler:

  • 3-node high availability;
  • accesses the apiserver's secure port with a kubeconfig file;

kubelet:

  • bootstrap tokens are created dynamically with kubeadm instead of being statically configured in the apiserver;
  • client and server certificates are generated via the TLS bootstrap mechanism and rotated automatically on expiry;
  • main parameters are configured in a JSON file of type KubeletConfiguration;
  • read-only port disabled; serves https on secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access;
  • accesses the apiserver's secure port with a kubeconfig file;

kube-proxy:

  • accesses the apiserver's secure port with a kubeconfig file;
  • main parameters are configured in a JSON file of type KubeProxyConfiguration;
  • uses the ipvs proxy mode;

Cluster add-ons:

  • DNS: coredns, for its better features and performance;
  • Dashboard: with login authentication;
  • Metrics: heapster and metrics-server, accessing the kubelet secure port over https;
  • Logging: Elasticsearch, Fluentd, Kibana;
  • Registry: docker-registry, harbor;

01.系統(tǒng)初始化

01-01.集群機(jī)器

  • kube-master:192.168.10.108
  • kube-node1:192.168.10.109
  • kube-node2:192.168.10.110

本文檔中的 etcd 集群、master 節(jié)點(diǎn)、worker 節(jié)點(diǎn)均使用這三臺(tái)機(jī)器。

?

在每個(gè)服務(wù)器上都要執(zhí)行以下全部操作,如果沒有特殊指明,本文檔的所有操作均在kube-master?節(jié)點(diǎn)上執(zhí)行

01-02.主機(jī)名

1、設(shè)置永久主機(jī)名稱,然后重新登錄

$ sudo hostnamectl set-hostname kube-master

$ sudo hostnamectl set-hostname kube-node1

$ sudo hostnamectl set-hostname kube-node2

?

2、修改 /etc/hostname 文件,添加主機(jī)名和 IP 的對應(yīng)關(guān)系:

$ vim /etc/hosts

192.168.10.108 kube-master

192.168.10.109 kube-node1

192.168.10.110 kube-node2

01-03. Add the k8s and docker accounts

1. Add a k8s account on every machine:

$ sudo useradd -m k8s

$ sudo sh -c 'echo along |passwd k8s --stdin'    # set a password for the k8s account

2. Adjust sudo permissions via visudo:

$ sudo visudo    # uncomment the line: %wheel ALL=(ALL) NOPASSWD: ALL

$ sudo grep '%wheel.*NOPASSWD: ALL' /etc/sudoers

%wheel ALL=(ALL) NOPASSWD: ALL

3. Add the k8s user to the wheel group:

$ gpasswd -a k8s wheel

Adding user k8s to group wheel

$ id k8s

uid=1000(k8s) gid=1000(k8s) groups=1000(k8s),10(wheel)

4. Add a docker account on every machine, add the k8s account to the docker group, and configure dockerd parameters (note: the docker group only exists once docker is installed):

$ sudo useradd -m docker

$ sudo gpasswd -a k8s docker

$ sudo mkdir -p /opt/docker/

$ vim /opt/docker/daemon.json    # this can also be done later, when deploying docker

{
  "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
  "max-concurrent-downloads": 20
}

01-04. Passwordless ssh login to the other nodes

1. Generate a key pair:

[root@kube-master ~]# ssh-keygen    # press Enter through all prompts

2. Copy the public key to every server:

[root@kube-master ~]# ssh-copy-id root@kube-master
[root@kube-master ~]# ssh-copy-id root@kube-node1
[root@kube-master ~]# ssh-copy-id root@kube-node2

[root@kube-master ~]# ssh-copy-id k8s@kube-master
[root@kube-master ~]# ssh-copy-id k8s@kube-node1
[root@kube-master ~]# ssh-copy-id k8s@kube-node2

01-05. Add the executable path /opt/k8s/bin to the PATH variable

Add the environment variable on every machine:

$ sudo sh -c "echo 'PATH=/opt/k8s/bin:$PATH:$HOME/bin:$JAVA_HOME/bin' >> /etc/profile.d/k8s.sh"

$ source /etc/profile.d/k8s.sh

01-06. Install dependency packages

Install the dependencies on every machine:

CentOS:

$ sudo yum install -y epel-release
$ sudo yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

Ubuntu:

$ sudo apt-get install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

Note: ipvs depends on ipset;

01-07. Disable the firewall

Disable the firewall on every machine:

① Stop the service and disable it at boot:

$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld

② Flush the firewall rules:

$ sudo iptables -F && sudo iptables -X && sudo iptables -F -t nat && sudo iptables -X -t nat
$ sudo iptables -P FORWARD ACCEPT

01-08. Disable the swap partition

1. With swap enabled, kubelet fails to start (this can be ignored by setting --fail-swap-on to false), so swap must be disabled on every machine:

$ sudo swapoff -a

2. To keep swap from being mounted again at boot, comment out the corresponding entry in /etc/fstab:

$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

01-09. Disable SELinux

1. Disable SELinux, otherwise later K8S mounts may fail with Permission denied:

$ sudo setenforce 0

2. Edit the configuration file to make the change permanent:

$ grep SELINUX /etc/selinux/config

SELINUX=disabled

01-10. Stop dnsmasq (optional)

When a linux system runs dnsmasq (e.g. in GUI environments), the system DNS server is set to 127.0.0.1, which prevents docker containers from resolving domain names; stop it:

$ sudo service dnsmasq stop
$ sudo systemctl disable dnsmasq

01-11. Load kernel modules

$ sudo modprobe br_netfilter
$ sudo modprobe ip_vs
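
modprobe only lasts until the next reboot. As a small convenience (the file name below is my own choice, not part of the original walkthrough), the modules can be loaded on every boot through systemd's modules-load mechanism:

# Load br_netfilter and ip_vs at boot; systemd-modules-load reads any *.conf under /etc/modules-load.d/
$ sudo sh -c 'printf "br_netfilter\nip_vs\n" > /etc/modules-load.d/kubernetes.conf'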

01-12. Set kernel parameters

$ cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

$ sudo cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf

$ sudo mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct

Notes:

  • tcp_tw_recycle conflicts with the NAT used by Kubernetes and must be disabled, otherwise services become unreachable;
  • the unused IPv6 stack is disabled to avoid triggering a docker bug;
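
A quick way to confirm the settings took effect (an optional check, not in the original post):

$ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1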

01-13. Set the system time zone

1. Adjust the system TimeZone:

$ sudo timedatectl set-timezone Asia/Shanghai

2. Write the current UTC time to the hardware clock:

$ sudo timedatectl set-local-rtc 0

3. Restart the services that depend on the system time:

$ sudo systemctl restart rsyslog
$ sudo systemctl restart crond

01-14. Update the system time

$ yum -y install ntpdate
$ sudo ntpdate cn.pool.ntp.org

01-15. Create directories

Create the directories on every machine:

$ sudo mkdir -p /opt/k8s/bin
$ sudo mkdir -p /opt/k8s/cert
$ sudo mkdir -p /opt/etcd/cert
$ sudo mkdir -p /opt/lib/etcd
$ sudo mkdir -p /opt/k8s/script

$ chown -R k8s /opt/*

01-16. Check whether the kernel and modules are suitable for running docker (linux only)

$ curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
$ chmod +x check-config.sh
$ bash ./check-config.sh

02.創(chuàng)建 CA 證書和秘鑰

  • 為確保安全, kubernetes 系統(tǒng)各組件需要使用 x509 證書對通信進(jìn)行加密和認(rèn)證。
  • CA (Certificate Authority) 是自簽名的根證書,用來簽名后續(xù)創(chuàng)建的其它證書。

本文檔使用 CloudFlare 的 PKI 工具集 cfssl 創(chuàng)建所有證書。

?

02-01.安裝 cfssl 工具集

mkdir -p /opt/k8s/cert && sudo chown -R k8s /opt/k8s && cd /opt/k8s

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

mv cfssl_linux-amd64 /opt/k8s/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

chmod +x /opt/k8s/bin/*

?

02-02. Create the root certificate (CA)

The CA certificate is shared by all nodes in the cluster; it only needs to be created once, and every certificate created afterwards is signed by it.

02-02-01 Create the configuration file

The CA configuration file defines usage profiles and concrete parameters (usages, expiry, server auth, client auth, encryption, ...); a specific profile is selected later when signing other certificates.

[root@kube-master ~]# cd /opt/k8s/cert

[root@kube-master cert]# vim ca-config.json

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

Notes:

① signing: the certificate can be used to sign other certificates; the generated ca.pem carries CA=TRUE;

② server auth: a client may use the certificate to verify certificates presented by servers;

③ client auth: a server may use the certificate to verify certificates presented by clients;

02-02-02 Create the certificate signing request file

[root@kube-master cert]# vim ca-csr.json

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}

Notes:

① CN: Common Name. kube-apiserver extracts this field from a certificate as the request's user name (User Name); browsers use it to verify whether a site is legitimate;

② O: Organization. kube-apiserver extracts this field as the group (Group) the requesting user belongs to;

③ kube-apiserver uses the extracted User and Group as the identity for RBAC authorization;

02-02-03 Generate the CA certificate and key

[root@kube-master cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca

[root@kube-master cert]# ls

ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
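
To inspect what was just generated, the cfssl-certinfo tool installed in 02-01 can decode any certificate (an optional check; the output is JSON with subject, issuer, validity and usages):

$ cfssl-certinfo -cert /opt/k8s/cert/ca.pem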

02-02-04 Distribute the certificate files

Copy the generated CA certificate, key and configuration file to /opt/k8s/cert on all nodes:

[root@kube-master ~]# vim /opt/k8s/script/scp_k8scert.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/cert && chown -R k8s /opt/k8s"
    scp /opt/k8s/cert/ca*.pem /opt/k8s/cert/ca-config.json k8s@${node_ip}:/opt/k8s/cert
done

[root@kube-master ~]# chmod +x /opt/k8s/script/scp_k8scert.sh && /opt/k8s/script/scp_k8scert.sh

03. Deploy the kubectl Command-Line Tool

  kubectl is the command-line management tool for a kubernetes cluster; this section covers installing and configuring it.

  By default kubectl reads the kube-apiserver address, certificate and user name from ~/.kube/config; without that file, kubectl commands fail:

$ kubectl get pods

The connection to the server localhost:8080 was refused - did you specify the right host or port?

This section only needs to be performed once; the generated kubeconfig file is machine-independent.

03-01. Download the kubectl binary

Download and unpack:

Downloading the kubectl binary requires *** access; I have uploaded it to my network drive -- contact me if you need it~

[root@kube-master ~]# wget https://dl.k8s.io/v1.10.4/kubernetes-client-linux-amd64.tar.gz

[root@kube-master ~]# tar -xzvf kubernetes-client-linux-amd64.tar.gz

03-02. Create the admin certificate and key

  • kubectl talks to the apiserver's https secure port; the apiserver authenticates and authorizes the presented certificate.
  • As the cluster management tool, kubectl must be granted the highest privileges, so an admin certificate with full privileges is created here.

03-02-01 Create the certificate signing request

[root@kube-master ~]# cd /opt/k8s/cert/

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Notes:

① O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;

② the predefined ClusterRoleBinding cluster-admin binds Group system:masters to Role cluster-admin, which grants access to all APIs;

③ the certificate is only used by kubectl as a client certificate, so the hosts field is empty;
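
Once admin.pem is generated (next step), the subject can be sanity-checked for the expected group (an optional check, assuming openssl is available; the exact output format varies slightly by version):

$ openssl x509 -in admin.pem -noout -subject
subject= /C=CN/ST=BeiJing/L=BeiJing/O=system:masters/OU=4Paradigm/CN=admin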

03-02-02 Generate the certificate and key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes admin-csr.json | cfssljson -bare admin

[root@kube-master cert]# ls admin*

admin.csr admin-csr.json admin-key.pem admin.pem

03-03. Create and distribute the kubeconfig file

03-03-01 Create the kubeconfig file

kubeconfig is kubectl's configuration file and contains everything needed to reach the apiserver: its address, the CA certificate, and the client's own certificate.

① Set the cluster parameters. --server=${KUBE_APISERVER} specifies the IP and port; I use the haproxy VIP and port. Without a haproxy proxy, use the actual service IP and port instead, e.g. https://192.168.10.108:6443.

[root@kube-master ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.10.10:8443 \
--kubeconfig=/root/.kube/kubectl.kubeconfig

② Set the client credentials:

[root@kube-master ~]# kubectl config set-credentials kube-admin \
--client-certificate=/opt/k8s/cert/admin.pem \
--client-key=/opt/k8s/cert/admin-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kubectl.kubeconfig

③ Set the context parameters:

[root@kube-master ~]# kubectl config set-context kube-admin@kubernetes \
--cluster=kubernetes \
--user=kube-admin \
--kubeconfig=/root/.kube/kubectl.kubeconfig

④ Set the default context:

[root@kube-master ~]# kubectl config use-context kube-admin@kubernetes --kubeconfig=/root/.kube/kubectl.kubeconfig

Note: kubernetes authentication is covered in detail in a later article in this series.

  • --certificate-authority: the root certificate used to verify the kube-apiserver certificate;
  • --client-certificate, --client-key: the admin certificate and key just generated, used when connecting to kube-apiserver;
  • --embed-certs=true: embed the contents of ca.pem and admin.pem into the generated kubectl.kubeconfig file (without it, only the certificate file paths are written);

The four kubectl config commands above are collected into /opt/k8s/script/kubectl_environment.sh and executed:

[root@kube-master ~]# chmod +x /opt/k8s/script/kubectl_environment.sh && /opt/k8s/script/kubectl_environment.sh

03-03-02 Verify the kubeconfig file

[root@kube-master ~]# ls /root/.kube/kubectl.kubeconfig

/root/.kube/kubectl.kubeconfig

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kubectl.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-admin
  name: kube-admin@kubernetes
current-context: kube-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kube-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

03-03-03 Distribute kubectl and the kubeconfig file to every node that will run kubectl commands

[root@kube-master ~]# vim /opt/k8s/script/scp_kubectl.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/kubernetes/client/bin/kubectl k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    ssh k8s@${node_ip} "mkdir -p ~/.kube"
    scp /root/.kube/kubectl.kubeconfig k8s@${node_ip}:~/.kube/config
    ssh root@${node_ip} "mkdir -p ~/.kube"
    scp /root/.kube/kubectl.kubeconfig root@${node_ip}:~/.kube/config
done

(the kubeconfig generated above is copied to ~/.kube/config, the default path kubectl reads)

[root@kube-master ~]# chmod +x /opt/k8s/script/scp_kubectl.sh && /opt/k8s/script/scp_kubectl.sh

04. Deploy the etcd Cluster

  etcd is a Raft-based distributed key-value store developed by CoreOS, commonly used for service discovery, shared configuration, and concurrency control (leader election, distributed locks, etc.). kubernetes stores all of its runtime data in etcd.

This section deploys a three-node high-availability etcd cluster:

① download and distribute the etcd binaries;

② create x509 certificates for the etcd nodes, encrypting traffic between clients (e.g. etcdctl) and the cluster, as well as between cluster members;

③ create the etcd systemd unit file and configure the service parameters;

④ check the health of the cluster;

04-01. Download the etcd binaries

Download the release from https://github.com/coreos/etcd/releases:

[root@kube-master ~]# wget https://github.com/coreos/etcd/releases/download/v3.3.7/etcd-v3.3.7-linux-amd64.tar.gz

[root@kube-master ~]# tar -xvf etcd-v3.3.7-linux-amd64.tar.gz

04-02. Create the etcd certificate and key

04-02-01 Create the certificate signing request

[root@kube-master ~]# cd /opt/etcd/cert

[root@kube-master cert]# cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Note: the hosts field lists the IPs or domain names authorized to use the certificate; all three etcd node IPs are listed here;

04-02-02 Generate the certificate and key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes etcd-csr.json | cfssljson -bare etcd

[root@kube-master cert]# ls etcd*

etcd.csr etcd-csr.json etcd-key.pem etcd.pem

04-02-03 Distribute the generated certificate and key to the etcd nodes

[root@kube-master ~]# vim /opt/k8s/script/scp_etcd.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/etcd-v3.3.7-linux-amd64/etcd* k8s@${node_ip}:/opt/k8s/bin
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    ssh root@${node_ip} "mkdir -p /opt/etcd/cert && chown -R k8s /opt/etcd/cert"
    scp /opt/etcd/cert/etcd*.pem k8s@${node_ip}:/opt/etcd/cert/
done

04-03. Create the etcd systemd unit template and etcd configuration

04-03-01 Create the etcd systemd unit template

[root@kube-master ~]# cat > /opt/etcd/etcd.service.template <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
User=k8s
Type=notify
WorkingDirectory=/opt/lib/etcd/
ExecStart=/opt/k8s/bin/etcd \\
  --data-dir=/opt/lib/etcd \\
  --name ##NODE_NAME## \\
  --cert-file=/opt/etcd/cert/etcd.pem \\
  --key-file=/opt/etcd/cert/etcd-key.pem \\
  --trusted-ca-file=/opt/k8s/cert/ca.pem \\
  --peer-cert-file=/opt/etcd/cert/etcd.pem \\
  --peer-key-file=/opt/etcd/cert/etcd-key.pem \\
  --peer-trusted-ca-file=/opt/k8s/cert/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --listen-peer-urls=https://##NODE_IP##:2380 \\
  --initial-advertise-peer-urls=https://##NODE_IP##:2380 \\
  --listen-client-urls=https://##NODE_IP##:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://##NODE_IP##:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=etcd0=https://192.168.10.108:2380,etcd1=https://192.168.10.109:2380,etcd2=https://192.168.10.110:2380 \\
  --initial-cluster-state=new
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:

  • User: run as the k8s account;
  • WorkingDirectory, --data-dir: the working and data directory is /opt/lib/etcd, which must exist before the service starts;
  • --name: the node name; when --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list;
  • --cert-file, --key-file: the certificate and key etcd uses when talking to clients;
  • --trusted-ca-file: the CA certificate that signed the client certificates, used to verify them;
  • --peer-cert-file, --peer-key-file: the certificate and key etcd uses when talking to peers;
  • --peer-trusted-ca-file: the CA certificate that signed the peer certificates, used to verify them;

04-04. Create and distribute the etcd systemd unit file for each node

[root@kube-master ~]# cd /opt/k8s/script

[root@kube-master script]# vim etcd_service.sh

NODE_NAMES=("etcd0" "etcd1" "etcd2")
NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
# Substitute the template variables to create a systemd unit file for each node
for (( i=0; i < 3; i++ ));do
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/g" -e "s/##NODE_IP##/${NODE_IPS[i]}/g" /opt/etcd/etcd.service.template > /opt/etcd/etcd-${NODE_IPS[i]}.service
done
# Distribute the generated systemd unit files
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/lib/etcd && chown -R k8s /opt/lib/etcd"
    scp /opt/etcd/etcd-${node_ip}.service root@${node_ip}:/etc/systemd/system/etcd.service
done

[root@kube-master script]# chmod +x /opt/k8s/script/etcd_service.sh && /opt/k8s/script/etcd_service.sh

[root@kube-master script]# ls /opt/etcd/*.service

/opt/etcd/etcd-192.168.10.108.service /opt/etcd/etcd-192.168.10.109.service /opt/etcd/etcd-192.168.10.110.service

[root@kube-master script]# ls /etc/systemd/system/etcd.service

/etc/systemd/system/etcd.service

04-05. Start the etcd service

[root@kube-master script]# vim /opt/k8s/script/etcd.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
# Start the etcd service
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable etcd && systemctl start etcd"
done
# Check the result; the state must be active (running)
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status etcd|grep Active"
done
# Verify the service; the cluster is working when every endpoint reports healthy
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
        --endpoints=https://${node_ip}:2379 \
        --cacert=/opt/k8s/cert/ca.pem \
        --cert=/opt/etcd/cert/etcd.pem \
        --key=/opt/etcd/cert/etcd-key.pem endpoint health
done

[root@kube-master script]# chmod +x etcd.sh && ./etcd.sh

>>> 192.168.10.108
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
>>> 192.168.10.109
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.
>>> 192.168.10.110
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /etc/systemd/system/etcd.service.

# Make sure the state is active (running); otherwise check the logs to find the cause: $ journalctl -u etcd
>>> 192.168.10.108
Active: active (running) since Mon 2018-11-26 17:41:00 CST; 12min ago
>>> 192.168.10.109
Active: active (running) since Mon 2018-11-26 17:41:00 CST; 12min ago
>>> 192.168.10.110
Active: active (running) since Mon 2018-11-26 17:41:01 CST; 12min ago

# All endpoints report healthy, so the cluster is working
>>> 192.168.10.108
https://192.168.10.108:2379 is healthy: successfully committed proposal: took = 1.373318ms
>>> 192.168.10.109
https://192.168.10.109:2379 is healthy: successfully committed proposal: took = 2.371807ms
>>> 192.168.10.110
https://192.168.10.110:2379 is healthy: successfully committed proposal: took = 1.764309ms
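
Beyond per-endpoint health, cluster membership can be listed through the v3 API (an optional check; it shows each member's name, peer URLs and client URLs):

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
--endpoints=https://192.168.10.108:2379 \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem member list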

05. Deploy the flannel Network

  • kubernetes requires that all nodes in the cluster (including the master) can reach one another over the Pod network. flannel uses vxlan to build an interconnected Pod network across nodes; it uses UDP port 8472, which must be opened (e.g. in security groups on public clouds such as AWS).
  • On first start, flanneld fetches the Pod network configuration from etcd, allocates an unused /24 subnet for its node, and creates the flannel.1 interface (the name may differ, e.g. flannel1).
  • flannel writes the allocated Pod subnet information into /run/flannel/docker; docker later uses the environment variables in this file to configure the docker0 bridge.

05-01. Download the flanneld binaries

Download the release from https://github.com/coreos/flannel/releases:

[root@kube-master ~]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

[root@kube-master ~]# mkdir -p flannel && tar -xzvf flannel-v0.10.0-linux-amd64.tar.gz -C flannel

05-02. Create the flannel certificate and key

flannel reads and writes its subnet allocation data in the etcd cluster, and etcd has mutual x509 authentication enabled, so flanneld needs its own certificate and key.

05-02-01 Create the certificate signing request:

[root@kube-master ~]# mkdir -p /opt/flannel/cert && cd /opt/flannel/cert

cat > flanneld-csr.json <<EOF
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

The certificate is only used by flanneld as a client certificate, so the hosts field is empty;

05-02-02 Generate the certificate and key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld

[root@kube-master cert]# ls flanneld*

flanneld.csr flanneld-csr.json flanneld-key.pem flanneld.pem

05-02-03 Distribute the flanneld binaries and the generated certificate and key to all nodes

cat > /opt/k8s/script/scp_flannel.sh <<'EOF'
NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/flannel/{flanneld,mk-docker-opts.sh} k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    ssh root@${node_ip} "mkdir -p /opt/flannel/cert && chown -R k8s /opt/flannel"
    scp /opt/flannel/cert/flanneld*.pem k8s@${node_ip}:/opt/flannel/cert
done
EOF

(the heredoc delimiter is quoted so that the variables are not expanded while writing the script)

05-03. Write the cluster Pod network configuration into etcd

Note: this step only needs to be performed once.

[root@kube-master ~]# etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
set /atomic.io/network/config '{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}'

{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

Notes:

  • the current flanneld release (v0.10.0) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API;
  • the Pod network ("Network") must be a /16 range and must match the --cluster-cidr parameter of kube-controller-manager;

05-04. Create the flanneld systemd unit file

[root@kube-master ~]# cat > /opt/flannel/flanneld.service << EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \\
  -etcd-cafile=/opt/k8s/cert/ca.pem \\
  -etcd-certfile=/opt/flannel/cert/flanneld.pem \\
  -etcd-keyfile=/opt/flannel/cert/flanneld-key.pem \\
  -etcd-endpoints=https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379 \\
  -etcd-prefix=/atomic.io/network \\
  -iface=eth1
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
EOF

Notes:

  • the mk-docker-opts.sh script writes the Pod subnet allocated to flanneld into /run/flannel/docker; when docker starts later, it uses the environment variables in this file to configure the docker0 bridge;
  • flanneld communicates with other nodes through the interface of the system default route; on nodes with multiple interfaces (e.g. internal and public), the interface can be selected with -iface, as with eth1 above;
  • flanneld requires root privileges;
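
For reference, the file written by mk-docker-opts.sh typically looks like the following (illustrative values; the subnet and MTU vary per node):

$ cat /run/flannel/docker
DOCKER_NETWORK_OPTIONS=" --bip=10.30.22.1/24 --ip-masq=true --mtu=1450"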

05-05. Distribute the flanneld systemd unit file to all nodes, then start and check the flanneld service

[root@kube-master ~]# vim /opt/k8s/script/flanneld_service.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    # Distribute the flanneld systemd unit file to all nodes
    scp /opt/flannel/flanneld.service root@${node_ip}:/etc/systemd/system/
    # Start the flanneld service
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
    # Check the result
    ssh k8s@${node_ip} "systemctl status flanneld|grep Active"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/flanneld_service.sh && /opt/k8s/script/flanneld_service.sh

Note: make sure the state is active (running); otherwise check the logs to find the cause:

$ journalctl -u flanneld

05-06. Check the Pod subnets allocated to each flanneld

05-06-01 View the cluster Pod network (/16)

[root@kube-master ~]# etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
get /atomic.io/network/config

Output:

{"Network":"10.30.0.0/16","SubnetLen": 24, "Backend": {"Type": "vxlan"}}

05-06-02 View the list of allocated Pod subnets (/24)

[root@kube-master ~]# etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
ls /atomic.io/network/subnets

Output:

/atomic.io/network/subnets/10.30.22.0-24
/atomic.io/network/subnets/10.30.33.0-24
/atomic.io/network/subnets/10.30.44.0-24

05-06-03 View the node IP and flannel interface address behind one Pod subnet

[root@kube-master ~]# etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--ca-file=/opt/k8s/cert/ca.pem \
--cert-file=/opt/flannel/cert/flanneld.pem \
--key-file=/opt/flannel/cert/flanneld-key.pem \
get /atomic.io/network/subnets/10.30.22.0-24

Output:

{"PublicIP":"192.168.10.108","BackendType":"vxlan","BackendData":{"VtepMAC":"fe:20:82:76:fc:25"}}

05-06-04 Verify that the nodes can reach one another over the Pod network

[root@kube-master ~]# vim /opt/k8s/script/ping_flanneld.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    # After deploying flannel, check that the flannel interface was created (its name may be flannel0, flannel.0, flannel.1, ...)
    ssh ${node_ip} "/usr/sbin/ip addr show flannel.1|grep -w inet"
    # From each node, ping every flannel interface IP to confirm connectivity
    ssh ${node_ip} "ping -c 1 10.30.22.0"
    ssh ${node_ip} "ping -c 1 10.30.33.0"
    ssh ${node_ip} "ping -c 1 10.30.44.0"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/ping_flanneld.sh && /opt/k8s/script/ping_flanneld.sh

06. Deploy the Master Nodes

① A kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

② kube-scheduler and kube-controller-manager can run in cluster mode: leader election produces one working process while the other processes block.

③ kube-apiserver can run as multiple instances (three in this document), but the other components need a single, highly available access address. This document uses keepalived and haproxy to provide a VIP with high availability and load balancing for kube-apiserver.

④ Because the master sits behind keepalived, any of the three servers may become the master (if the primary goes down, a backup is promoted); therefore all master steps must be performed on all three servers.

1. Download the binaries

Download the server tarball from the CHANGELOG page. These 2 packages also require *** to download.

[root@kube-master ~]# wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz
[root@kube-master ~]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@kube-master ~]# cd kubernetes/
[root@kube-master kubernetes]# tar -xzvf kubernetes-src.tar.gz

2. Copy the binaries to all master nodes

[root@kube-master ~]# vim /opt/k8s/script/scp_master.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/kubernetes/server/bin/* k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/scp_master.sh && /opt/k8s/script/scp_master.sh

06-01. Deploy the high-availability components

① This section sets up kube-apiserver high availability with keepalived and haproxy:

  • keepalived provides the VIP through which kube-apiserver is reached from outside;
  • haproxy listens on the VIP and load-balances across all kube-apiserver instances with health checking;

② Nodes running keepalived and haproxy are called LB nodes. keepalived runs in one-master-multiple-backup mode, so at least two LB nodes are needed.

③ This document reuses the three master machines as LB nodes. haproxy listens on port 8443, deliberately different from kube-apiserver's port 6443, to avoid a conflict.

④ keepalived periodically checks the local haproxy process; if it detects that haproxy is abnormal, it triggers a new master election and the VIP floats to the newly elected node, keeping the VIP highly available.

⑤ All components (kubectl, apiserver, controller-manager, scheduler, ...) reach the kube-apiserver service through the VIP and haproxy's port 8443.

06-01-01 Install the packages and write the haproxy configuration file

[root@kube-master ~]# yum install -y keepalived haproxy

[root@kube-master ~]# vim /etc/haproxy/haproxy.cfg

[root@kube-master ~]# cat /etc/haproxy/haproxy.cfg

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth along:along123
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.10.108 192.168.10.108:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.10.109 192.168.10.109:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.10.110 192.168.10.110:6443 check inter 2000 fall 2 rise 2 weight 1

Notes:

  • haproxy exposes its status page on port 10080;
  • haproxy listens on port 8443 on all interfaces; this port must match the one in the ${KUBE_APISERVER} environment variable;
  • the server lines list the IP and port of every kube-apiserver instance;

06-01-02 Install haproxy on the other servers and distribute the configuration file; start and check the haproxy service

[root@kube-master ~]# vim /opt/k8s/script/haproxy.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    # Install haproxy
    ssh root@${node_ip} "yum install -y keepalived haproxy"
    # Distribute the configuration file
    scp /etc/haproxy/haproxy.cfg root@${node_ip}:/etc/haproxy
    # Start and check the haproxy service
    ssh root@${node_ip} "systemctl restart haproxy"
    ssh root@${node_ip} "systemctl enable haproxy.service"
    ssh root@${node_ip} "systemctl status haproxy|grep Active"
    # Check that haproxy is listening on port 8443
    ssh root@${node_ip} "netstat -lnpt|grep haproxy"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/haproxy.sh && /opt/k8s/script/haproxy.sh

Make sure the output looks like:

tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 5351/haproxy
tcp 0 0 0.0.0.0:10080 0.0.0.0:* LISTEN 5351/haproxy

06-01-03 Configure and start the keepalived service

keepalived runs in one-master (master), multiple-backup (backup) mode, so there are two kinds of configuration files.

There is a single master configuration file; the number of backup configuration files depends on the node count. For this document, the plan is:

  • master: 192.168.10.108
  • backup: 192.168.10.109, 192.168.10.110

(1) Configuration file for the master service on 192.168.10.108:

[root@kube-master ~]# vim /etc/keepalived/keepalived.conf

global_defs {
    router_id keepalived_hap
}
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}
vrrp_instance VI-kube-master {
    state MASTER
    priority 120
    dont_track_primary
    interface eth1
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

Notes:

  • my VIP lives on interface eth1; adjust this to your own environment;
  • killall -0 haproxy checks whether the haproxy process on this node is alive; if not, the node's priority is reduced (-30), triggering a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA setup; multiple keepalived HA setups must each use distinct values;

(2) Configuration file for the two backup services:

[root@kube-node1 ~]# vim /etc/keepalived/keepalived.conf

global_defs {
    router_id keepalived_hap
}
vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 5
    weight -30
}
vrrp_instance VI-kube-master {
    state BACKUP
    priority 110    # 100 on the second backup
    dont_track_primary
    interface eth1
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

Notes:

  • my VIP lives on interface eth1; adjust this to your own environment;
  • killall -0 haproxy checks whether the haproxy process on this node is alive; if not, the node's priority is reduced (-30), triggering a new master election;
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA setup; multiple keepalived HA setups must each use distinct values;
  • priority must be lower than the master's, and the two backups must also use different values;

(3) Start the keepalived service

[root@kube-master ~]# vim /opt/k8s/script/keepalived.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
VIP="192.168.10.10"
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl restart keepalived && systemctl enable keepalived"
    ssh root@${node_ip} "systemctl status keepalived|grep Active"
    ssh ${node_ip} "ping -c 1 ${VIP}"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/keepalived.sh && /opt/k8s/script/keepalived.sh

(4) On the master server, the 192.168.10.10 VIP is now visible on the eth1 interface:

[root@kube-master ~]# ip a show eth1

3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:50:56:22:1b:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.108/24 brd 192.168.10.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 192.168.10.10/32 scope global eth1
       valid_lft forever preferred_lft forever
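
A quick manual failover test (an optional sketch; stop haproxy rather than keepalived so the check-haproxy script is what triggers the election):

[root@kube-master ~]# systemctl stop haproxy    # the vrrp_script lowers this node's priority by 30
[root@kube-node1 ~]# ip a show eth1 | grep 192.168.10.10    # the VIP should now appear on a backup node
[root@kube-master ~]# systemctl start haproxy    # the VIP floats back once the priority recovers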

06-01-04 View the haproxy status page

Open 192.168.10.10:10080/status in a browser.

① Enter the username and password defined in the configuration file above.

② View the haproxy status page.

06-02. Deploy the kube-apiserver Component

This section deploys a 3-node high-availability master cluster using keepalived and haproxy; the LB VIP is referred to by the environment variable ${MASTER_VIP}.

Prerequisites: the binaries downloaded above, and flanneld installed and configured.

06-02-01 Create the kubernetes certificate and key

(1) Create the certificate signing request:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110",
    "192.168.10.10",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Notes:

  • the hosts field lists the IPs and domain names authorized to use the certificate; here it includes the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
  • a domain name must not end with a dot (e.g. kubernetes.default.svc.cluster.local. is invalid), otherwise resolution fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
  • with a domain other than cluster.local, e.g. opsnull.com, change the last two names to kubernetes.default.svc.opsnull and kubernetes.default.svc.opsnull.com;
  • the kubernetes service IP is created automatically by the apiserver, usually the first IP of the range given by --service-cluster-ip-range; it can be retrieved later with:

[root@kube-master ~]# kubectl get svc kubernetes

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   4d

(2) Generate the certificate and key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

[root@kube-master cert]# ls kubernetes*

kubernetes.csr kubernetes-csr.json kubernetes-key.pem kubernetes.pem

06-02-02 Create the encryption configuration file

① Generate a key for encrypting etcd data:

[root@kube-master ~]# head -c 32 /dev/urandom | base64

uS+YQXYoi1nxvI1pfSc2wRt64h/Iu5/4GxCuSvN+/jI=

Note: every master node must use the same key.

② Create the encryption configuration file with this key:

[root@kube-master cert]# vim encryption-config.yaml

kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: uS+YQXYoi1nxvI1pfSc2wRt64h/Iu5/4GxCuSvN+/jI=
      - identity: {}

06-02-03 Copy the generated certificate, key and encryption configuration to /opt/k8s on the master nodes

[root@kube-master cert]# vim /opt/k8s/script/scp_apiserver.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/k8s/cert/ && sudo chown -R k8s /opt/k8s/cert/"
    scp /opt/k8s/cert/kubernetes*.pem k8s@${node_ip}:/opt/k8s/cert/
    scp /opt/k8s/cert/encryption-config.yaml root@${node_ip}:/opt/k8s/
done

[root@kube-master cert]# chmod +x /opt/k8s/script/scp_apiserver.sh && /opt/k8s/script/scp_apiserver.sh

06-02-04 Create the kube-apiserver systemd unit template file

[root@kube-master ~]# mkdir -p /opt/apiserver

cat > /opt/apiserver/kube-apiserver.service.template <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/k8s/bin/kube-apiserver \\
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --experimental-encryption-provider-config=/opt/k8s/encryption-config.yaml \\
  --advertise-address=##NODE_IP## \\
  --bind-address=##NODE_IP## \\
  --insecure-port=0 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.96.0.0/16 \\
  --service-node-port-range=1-32767 \\
  --tls-cert-file=/opt/k8s/cert/kubernetes.pem \\
  --tls-private-key-file=/opt/k8s/cert/kubernetes-key.pem \\
  --client-ca-file=/opt/k8s/cert/ca.pem \\
  --kubelet-client-certificate=/opt/k8s/cert/kubernetes.pem \\
  --kubelet-client-key=/opt/k8s/cert/kubernetes-key.pem \\
  --service-account-key-file=/opt/k8s/cert/ca-key.pem \\
  --etcd-cafile=/opt/k8s/cert/ca.pem \\
  --etcd-certfile=/opt/k8s/cert/kubernetes.pem \\
  --etcd-keyfile=/opt/k8s/cert/kubernetes-key.pem \\
  --etcd-servers=https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-apiserver-audit.log \\
  --event-ttl=1h \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/opt/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
User=k8s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:

  • --experimental-encryption-provider-config: enables encryption at rest;
  • --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes, rejecting unauthorized requests;
  • --enable-admission-plugins: enables admission plugins such as ServiceAccount and NodeRestriction;
  • --service-account-key-file: the public key used to verify ServiceAccount tokens; it pairs with the private key given to kube-controller-manager via --service-account-private-key-file;
  • --tls-*-file: the certificate, key and CA files the apiserver uses. --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, ...);
  • --kubelet-client-certificate, --kubelet-client-key: if set, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the certificate's user (the kubernetes*.pem certificate above carries the user kubernetes), otherwise kubelet API calls fail as unauthorized;
  • --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 is unreachable from outside;
  • --insecure-port=0: closes the insecure port (8080);
  • --service-cluster-ip-range: the Service Cluster IP range;
  • --service-node-port-range: the NodePort port range;
  • --runtime-config=api/all=true: enables all API versions, e.g. autoscaling/v2alpha1;
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
  • --apiserver-count=3: the number of kube-apiserver instances running in the cluster;
  • User=k8s: run as the k8s account;

06-02-05 Create and distribute the kube-apiserver systemd unit file for each node; start and check the kube-apiserver service

[root@kube-master ~]# vim /opt/k8s/script/apiserver_service.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
# Substitute the template variables to create a systemd unit file for each node
for (( i=0; i < 3; i++ ));do
    sed "s/##NODE_IP##/${NODE_IPS[i]}/" /opt/apiserver/kube-apiserver.service.template > /opt/apiserver/kube-apiserver-${NODE_IPS[i]}.service
done
# Start and check the kube-apiserver service
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    scp /opt/apiserver/kube-apiserver-${node_ip}.service root@${node_ip}:/etc/systemd/system/kube-apiserver.service
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"
    ssh root@${node_ip} "systemctl status kube-apiserver |grep 'Active:'"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/apiserver_service.sh && /opt/k8s/script/apiserver_service.sh

Make sure the state is active (running); otherwise check the logs on the master node to find the cause:

journalctl -u kube-apiserver

06-02-06 Print the data kube-apiserver has written into etcd

[root@kube-master ~]# ETCDCTL_API=3 etcdctl \
--endpoints="https://192.168.10.108:2379,https://192.168.10.109:2379,https://192.168.10.110:2379" \
--cacert=/opt/k8s/cert/ca.pem \
--cert=/opt/etcd/cert/etcd.pem \
--key=/opt/etcd/cert/etcd-key.pem \
get /registry/ --prefix --keys-only

06-02-07 Check cluster information

[root@kube-master ~]# kubectl cluster-info

Kubernetes master is running at https://192.168.10.108:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

[root@kube-master ~]# kubectl get all --all-namespaces

NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   16h

[root@kube-master ~]# kubectl get componentstatuses

NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused
etcd-1               Healthy     {"health":"true"}
etcd-2               Healthy     {"health":"true"}
etcd-0               Healthy     {"health":"true"}

Notes:

① if kubectl prints the following error, the wrong ~/.kube/config file is in use; switch to the correct account and run the command again:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

② for kubectl get componentstatuses, the apiserver queries 127.0.0.1 by default. When controller-manager and scheduler run in cluster mode they may not be on the same machine as kube-apiserver, so their status can show Unhealthy even though they are actually working fine. (Here they are also simply not deployed yet.)

06-02-08 Check the ports kube-apiserver listens on

[root@kube-master ~]# ss -nutlp |grep apiserver

tcp   LISTEN   0   128   192.168.10.108:6443   *:*   users:(("kube-apiserver",pid=929,fd=5))

  • 6443: the secure port serving https; every request is authenticated and authorized;
  • since the insecure port is disabled, nothing listens on 8080;

06-02-09 Grant the kubernetes certificate access to the kubelet API

When running kubectl exec, run, logs and similar commands, the apiserver forwards the request to the kubelet. The RBAC rule defined here authorizes the apiserver to call the kubelet API.

[root@kube-master ~]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes

clusterrolebinding.rbac.authorization.k8s.io "kube-apiserver:kubelet-apis" created

06-03. Deploy the High-Availability kube-controller-manager Cluster

  This section deploys a high-availability kube-controller-manager cluster.

  The cluster has 3 nodes; after startup, leader election produces one leader node while the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.

  For secure communication, an x509 certificate and key are generated first; kube-controller-manager uses the certificate in two situations:

① when talking to the kube-apiserver secure port;

② when serving prometheus-format metrics on the secure port (https, 10252);

Prerequisites: the binaries downloaded above, and flanneld installed and configured.

06-03-01 Create the kube-controller-manager certificate and key

Create the certificate signing request:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Notes:

  • the hosts list contains all kube-controller-manager node IPs;
  • CN is system:kube-controller-manager and O is system:kube-controller-manager; the kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

06-03-02 Generate the certificate and key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

[root@kube-master cert]# ls *controller-manager*

kube-controller-manager.csr kube-controller-manager-csr.json kube-controller-manager-key.pem kube-controller-manager.pem

06-03-03 Create the kubeconfig file

The kubeconfig file contains everything needed to reach the apiserver: its address, the CA certificate, and the client's own certificate.

① Run the commands to generate the kube-controller-manager.kubeconfig file:

[root@kube-master ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.10.10:8443 \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config set-credentials system:kube-controller-manager \
--client-certificate=/opt/k8s/cert/kube-controller-manager.pem \
--client-key=/opt/k8s/cert/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config use-context system:kube-controller-manager@kubernetes --kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

② Verify the kube-controller-manager.kubeconfig file:

[root@kube-master cert]# ls /root/.kube/kube-controller-manager.kubeconfig

/root/.kube/kube-controller-manager.kubeconfig

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kube-controller-manager.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

06-03-04 Distribute the generated certificate, key and kubeconfig to all master nodes

[root@kube-master ~]# vim /opt/k8s/script/scp_controller_manager.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "chown k8s /opt/k8s/cert/*"
    scp /opt/k8s/cert/kube-controller-manager*.pem k8s@${node_ip}:/opt/k8s/cert/
    scp /root/.kube/kube-controller-manager.kubeconfig k8s@${node_ip}:/opt/k8s/
done

[root@kube-master ~]# chmod +x /opt/k8s/script/scp_controller_manager.sh && /opt/k8s/script/scp_controller_manager.sh

06-03-05 Create and distribute the kube-controller-manager systemd unit file

[root@kube-master ~]# mkdir /opt/controller_manager

[root@kube-master ~]# cd /opt/controller_manager

[root@kube-master controller_manager]# cat > kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-controller-manager \\
  --port=0 \\
  --secure-port=10252 \\
  --bind-address=127.0.0.1 \\
  --kubeconfig=/opt/k8s/kube-controller-manager.kubeconfig \\
  --service-cluster-ip-range=10.96.0.0/16 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/opt/k8s/cert/ca.pem \\
  --cluster-signing-key-file=/opt/k8s/cert/ca-key.pem \\
  --experimental-cluster-signing-duration=8760h \\
  --root-ca-file=/opt/k8s/cert/ca.pem \\
  --service-account-private-key-file=/opt/k8s/cert/ca-key.pem \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-use-rest-clients=true \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/opt/k8s/cert/kube-controller-manager.pem \\
  --tls-private-key-file=/opt/k8s/cert/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target
EOF

Notes:

  • --port=0: closes the http /metrics port; this makes --address ineffective while --bind-address takes effect;
  • --secure-port=10252, --bind-address: serve https /metrics on port 10252 at the given address; binding 127.0.0.1 as above keeps metrics reachable only locally, while 0.0.0.0 would listen on all interfaces;
  • --kubeconfig: the kubeconfig file path; kube-controller-manager uses it to connect to and authenticate against kube-apiserver;
  • --cluster-signing-*-file: sign the certificates created by TLS Bootstrap;
  • --experimental-cluster-signing-duration: the validity period of TLS Bootstrap certificates;
  • --root-ca-file: the CA certificate placed into each container's ServiceAccount, used to verify the kube-apiserver certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
  • --service-cluster-ip-range: the Service Cluster IP range, which must match the same parameter of kube-apiserver;
  • --leader-elect=true: cluster mode with leader election; the elected leader does the work while the other nodes block;
  • --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
  • --controllers=*,bootstrapsigner,tokencleaner: the controllers to enable; tokencleaner automatically cleans up expired Bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics parameters, supporting autoscaling/v2alpha1;
  • --tls-cert-file, --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: each controller uses its own ServiceAccount to access the apiserver (see 06-03-06);
  • User=k8s: run as the k8s account;

kube-controller-manager does not verify client certificates on https metrics requests, so --tls-ca-file is unnecessary; that flag has also been deprecated.

06-03-06 kube-controller-manager permissions

  The ClusterRole system:kube-controller-manager carries very limited permissions -- it can only create secrets, serviceaccounts and a few other resources; each controller's permissions live in a separate ClusterRole system:controller:XXX.

  Adding --use-service-account-credentials=true to the kube-controller-manager start parameters makes the main controller create a ServiceAccount XXX-controller for each controller.

  The built-in ClusterRoleBinding system:controller:XXX then grants each XXX-controller ServiceAccount the corresponding ClusterRole system:controller:XXX permissions.
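
The effect can be observed once the cluster is running (an optional check): each controller gets its own ServiceAccount in kube-system, e.g. deployment-controller, node-controller, endpoint-controller:

[root@kube-master ~]# kubectl get serviceaccounts -n kube-system | grep controller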

06-03-07 Distribute the systemd unit file to all master nodes; start and check the kube-controller-manager service

[root@kube-master ~]# vim /opt/k8s/script/controller_manager.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /opt/controller_manager/kube-controller-manager.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl start kube-controller-manager"
done
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh k8s@${node_ip} "systemctl status kube-controller-manager|grep Active"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/controller_manager.sh && /opt/k8s/script/controller_manager.sh

06-03-08 View the exported metrics

Note: run the following commands on a kube-controller-manager node.

[root@kube-master ~]# ss -nutlp |grep kube-controll

tcp LISTEN 0 128 127.0.0.1:10252 *:* users:(("kube-controller",pid=6532,fd=5))

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://127.0.0.1:10252/metrics |head

# HELP ClusterRoleAggregator_adds Total number of adds handled by workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_adds counter
ClusterRoleAggregator_adds 6
# HELP ClusterRoleAggregator_depth Current depth of workqueue: ClusterRoleAggregator
# TYPE ClusterRoleAggregator_depth gauge
ClusterRoleAggregator_depth 0
# HELP ClusterRoleAggregator_queue_latency How long an item stays in workqueueClusterRoleAggregator before being requested.
# TYPE ClusterRoleAggregator_queue_latency summary
ClusterRoleAggregator_queue_latency{quantile="0.5"} 431
ClusterRoleAggregator_queue_latency{quantile="0.9"} 85089

Note: curl --cacert passes the CA certificate used to verify the kube-controller-manager https server certificate;

06-03-09 Test kube-controller-manager cluster high availability

1. Stop the kube-controller-manager service on one or two nodes and watch the logs on the other nodes to see whether one of them acquires the leader lock.

2. View the current leader:

[root@kube-master ~]# kubectl get endpoints kube-controller-manager --namespace=kube-system -o yaml

apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-master_53bc08b7-f69d-11e8-9e79-0050563ab62b","leaseDurationSeconds":15,"acquireTime":"2018-12-03T01:48:18Z","renewTime":"2018-12-03T01:59:15Z","leaderTransitions":5}'
  creationTimestamp: 2018-11-29T03:12:14Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "56075"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 91e64a51-f384-11e8-a392-0050563ab62b

The holderIdentity field identifies the current leader -- here the kube-master node; after a failover test it changes to the newly elected node (e.g. kube-node1), and leaderTransitions increments.
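
To print just the leader annotation without the full YAML (a convenience one-liner using kubectl's jsonpath; the dots in the annotation key are escaped):

[root@kube-master ~]# kubectl -n kube-system get endpoints kube-controller-manager -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'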

06-04. Deploy the High-Availability kube-scheduler Cluster

  This section deploys a high-availability kube-scheduler cluster.

  The cluster has 3 nodes; after startup, leader election produces one leader node while the other nodes block. When the leader becomes unavailable, the remaining nodes elect a new leader, keeping the service available.

  For secure communication, an x509 certificate and key are generated first; kube-scheduler uses the certificate in two situations:

① when talking to the kube-apiserver secure port;

② when serving prometheus-format metrics on port 10251 (plain http -- this kube-scheduler version does not yet support serving https, see 06-04-07);

Prerequisites: the binaries downloaded above, and flanneld installed and configured.

06-04-01 Create the kube-scheduler certificate and key

Create the certificate signing request:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kube-scheduler-csr.json <<EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.10.108",
    "192.168.10.109",
    "192.168.10.110"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-scheduler",
      "OU": "4Paradigm"
    }
  ]
}
EOF

Notes:

  • the hosts list contains all kube-scheduler node IPs;
  • CN is system:kube-scheduler and O is system:kube-scheduler; the kubernetes built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

06-04-02 Generate the certificate and key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \
-ca-key=/opt/k8s/cert/ca-key.pem \
-config=/opt/k8s/cert/ca-config.json \
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

[root@kube-master cert]# ls *scheduler*

kube-scheduler.csr kube-scheduler-csr.json kube-scheduler-key.pem kube-scheduler.pem

06-04-03 Create the kubeconfig file

The kubeconfig file contains everything needed to reach the apiserver: its address, the CA certificate, and the client's own certificate.

① Run the commands to generate the kube-scheduler.kubeconfig file:

[root@kube-master ~]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/k8s/cert/ca.pem \
--embed-certs=true \
--server=https://192.168.10.10:8443 \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config set-credentials system:kube-scheduler \
--client-certificate=/opt/k8s/cert/kube-scheduler.pem \
--client-key=/opt/k8s/cert/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config use-context system:kube-scheduler@kubernetes --kubeconfig=/root/.kube/kube-scheduler.kubeconfig

② Verify the kube-scheduler.kubeconfig file:

[root@kube-master cert]# ls /root/.kube/kube-scheduler.kubeconfig

/root/.kube/kube-scheduler.kubeconfig

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kube-scheduler.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

06-04-04 Distribute the generated certificate, private key, and kubeconfig file to all master nodes

[root@kube-master ~]# vim /opt/k8s/script/scp_scheduler.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110") for node_ip in ${NODE_IPS[@]};doecho ">>> ${node_ip}"ssh root@${node_ip} "chown k8s /opt/k8s/cert/*"scp /opt/k8s/cert/kube-scheduler*.pem k8s@${node_ip}:/opt/k8s/cert/scp /root/.kube/kube-scheduler.kubeconfig k8s@${node_ip}:/opt/k8s/ done

[root@kube-master ~]# chmod +x /opt/k8s/script/scp_scheduler.sh && /opt/k8s/script/scp_scheduler.sh

?

06-04-05 Create the kube-scheduler systemd unit file

[root@kube-master ~]# mkdir /opt/scheduler

[root@kube-master ~]# cd /opt/scheduler

[root@kube-master scheduler]# cat > kube-scheduler.service <<EOF

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/k8s/bin/kube-scheduler \\
    --address=127.0.0.1 \\
    --kubeconfig=/opt/k8s/kube-scheduler.kubeconfig \\
    --leader-elect=true \\
    --alsologtostderr=true \\
    --logtostderr=false \\
    --log-dir=/opt/log/kubernetes \\
    --v=2
Restart=on-failure
RestartSec=5
User=k8s

[Install]
WantedBy=multi-user.target

EOF

Notes:

  • --address: serve http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https;
  • --kubeconfig: the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver (distributed to /opt/k8s/ in the previous step, so the path here points there);
  • --leader-elect=true: cluster mode with leader election enabled; the elected leader does the work while the other nodes block;
  • User=k8s: run as the k8s account;

?

06-04-06 Distribute the systemd unit file to all master nodes; start and check the kube-scheduler service

[root@kube-master scheduler]# vim /opt/k8s/script/scheduler.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110") for node_ip in ${NODE_IPS[@]};doecho ">>> ${node_ip}"scp /opt/scheduler/kube-scheduler.service root@${node_ip}:/etc/systemd/system/ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-scheduler && systemctl start kube-scheduler" donefor node_ip in ${NODE_IPS[@]};doecho ">>> ${node_ip}"ssh k8s@${node_ip} "systemctl status kube-scheduler|grep Active" done

[root@kube-master scheduler]# chmod +x /opt/k8s/script/scheduler.sh && /opt/k8s/script/scheduler.sh

Make sure the status is active (running); otherwise check the logs to find the cause:

journalctl -u kube-scheduler

?

06-04-07 Check the exported metrics

Note: run the following commands on a kube-scheduler node.

kube-scheduler listens on port 10251 and serves http requests:

[root@kube-master ~]# ss -nutlp |grep kube-scheduler

tcp   LISTEN   0   128 127.0.0.1:10251   *:*   users:(("kube-scheduler",pid=14968,fd=8))

[root@kube-master ~]# curl -s http://127.0.0.1:10251/metrics |head

# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.

# TYPE apiserver_audit_event_total counter

apiserver_audit_event_total 0

# HELP go_gc_duration_seconds A summary of the GC invocation durations.

# TYPE go_gc_duration_seconds summary

go_gc_duration_seconds{quantile="0"} 3.6554e-05

go_gc_duration_seconds{quantile="0.25"} 0.000133804

go_gc_duration_seconds{quantile="0.5"} 0.000203523

go_gc_duration_seconds{quantile="0.75"} 0.000683624

go_gc_duration_seconds{quantile="1"} 0.001188571

?

06-04-08 Test the high availability of the kube-scheduler cluster

1. Pick one or two master nodes and stop their kube-scheduler service, then check whether another node has acquired leadership (via the systemd logs); a minimal test sequence is sketched below.
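For example, run on whichever node currently holds the lease (node names here are illustrative):

[root@kube-master ~]# systemctl stop kube-scheduler
[root@kube-master ~]# kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity

Once the 15-second lease expires, holderIdentity should name one of the surviving nodes; afterwards restart the stopped service with systemctl start kube-scheduler.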

?

2. Check the current leader:

[root@kube-master ~]# kubectl get endpoints kube-scheduler --namespace=kube-system -o yaml

apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"kube-node1_531fab4b-f69d-11e8-ba0a-00505631d257","leaseDurationSeconds":15,"acquireTime":"2018-12-03T01:48:23Z","renewTime":"2018-12-03T02:02:28Z","leaderTransitions":4}'
  creationTimestamp: 2018-11-29T05:50:35Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "56324"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: b1435e86-f39a-11e8-a392-0050563ab62b

As the holderIdentity annotation shows, the current leader is the kube-node1 node (it was originally the kube-master node).

?

07. Deploying the worker nodes

The kubernetes worker nodes run the following components:

  • docker
  • kubelet
  • kube-proxy

1. Install and configure flanneld

See 05. Deploying the flannel network

?

2. Install dependency packages

CentOS:

$ yum install -y epel-release

$ yum install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs

?

Ubuntu:

$ apt-get install -y conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs

?

07-01. Deploying the docker component

docker is the container runtime and manages the container lifecycle. kubelet interacts with docker through the Container Runtime Interface (CRI).

07-01-01 Download the docker binaries

Download the latest release from https://download.docker.com/linux/static/stable/x86_64/:

wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.1-ce.tgz
tar -xvf docker-18.03.1-ce.tgz

?

07-01-02 Create and distribute the systemd unit file

[root@kube-master ~]# mkdir -p /opt/docker

[root@kube-master ~]# cd /opt/docker

[root@kube-master docker]# cat > docker.service << "EOF"

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io

[Service]
Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
EnvironmentFile=-/run/flannel/docker
ExecStart=/opt/k8s/bin/dockerd --log-level=error $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure
RestartSec=5
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

EOF

  • EOF is quoted so that bash does not substitute variables inside the here-document, such as $DOCKER_NETWORK_OPTIONS;
  • dockerd invokes other docker binaries at runtime, such as docker-proxy, so the directory holding the docker binaries must be added to the PATH environment variable;
  • flanneld writes its network configuration to /run/flannel/docker at startup; before starting, dockerd reads the DOCKER_NETWORK_OPTIONS variable from that file and uses it to set the docker0 bridge subnet;
  • if several EnvironmentFile options are specified, /run/flannel/docker must come last (this ensures docker0 uses the bip parameter generated by flanneld);
  • docker must run as root;
  • starting with docker 1.13, the default policy of the iptables FORWARD chain may be set to DROP, which breaks pinging Pod IPs on other Nodes. If you hit this, reset the policy to ACCEPT: $ sudo iptables -P FORWARD ACCEPT; and write the command into /etc/rc.local so a node reboot does not flip the policy back to DROP: $ /sbin/iptables -P FORWARD ACCEPT

?

07-01-03 Write the docker configuration file

Use domestic registry mirrors to speed up image pulls, and raise the download concurrency (a dockerd restart is required to take effect):

cat > docker-daemon.json <<EOF

{
    "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],
    "max-concurrent-downloads": 20
}

EOF
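Since jq was installed as a dependency earlier, you can validate the JSON before distributing it (a small optional check, not part of the original procedure):

[root@kube-master docker]# jq . /opt/docker/docker-daemon.json

jq prints the parsed document on success and a parse error otherwise.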

?

07-01-04 Distribute the docker binaries, systemd unit file, and docker configuration file to all worker machines

[root@kube-master ~]# vim /opt/k8s/script/scp_docker.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/docker/docker* k8s@${node_ip}:/opt/k8s/bin/
    ssh k8s@${node_ip} "chmod +x /opt/k8s/bin/*"
    scp /opt/docker/docker.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /opt/docker/"
    scp /opt/docker/docker-daemon.json root@${node_ip}:/opt/docker/daemon.json
done

?

07-01-05 Start and check the docker service

[root@kube-master ~]# vim /opt/k8s/script/docker.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh root@${node_ip} "systemctl stop firewalld && systemctl disable firewalld"
    ssh root@${node_ip} "/usr/sbin/iptables -F && /usr/sbin/iptables -X && /usr/sbin/iptables -F -t nat && /usr/sbin/iptables -X -t nat"
    ssh root@${node_ip} "/usr/sbin/iptables -P FORWARD ACCEPT"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"
    ssh root@${node_ip} 'for intf in /sys/devices/virtual/net/docker0/brif/*; do echo 1 > $intf/hairpin_mode; done'
    ssh root@${node_ip} "sudo sysctl -p /etc/sysctl.d/kubernetes.conf"
    # check the service status
    ssh k8s@${node_ip} "systemctl status docker|grep Active"
    # check the docker0 bridge
    ssh k8s@${node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"
done

Notes:

  • stop firewalld (CentOS 7) / ufw (Ubuntu 16.04) first, otherwise duplicate iptables rules may be created;
  • clean up the old iptables rules and chains;
  • enable hairpin mode on the virtual NICs attached to the docker0 bridge;

?

[root@kube-master ~]# chmod +x /opt/k8s/script/docker.sh && /opt/k8s/script/docker.sh

① Make sure the status is active (running); otherwise check the logs to find the cause:

$ journalctl -u docker

② Confirm that on each worker node the docker0 bridge and the flannel.1 interface have IPs in the same subnet (here 10.30.89.0 and 10.30.89.1):

4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
    link/ether ea:b3:44:ab:36:16 brd ff:ff:ff:ff:ff:ff
    inet 10.30.89.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
7: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether 02:42:8e:6e:ea:ef brd ff:ff:ff:ff:ff:ff
    inet 10.30.89.1/24 brd 10.30.89.255 scope global docker0
       valid_lft forever preferred_lft forever

?

07-02. Deploying the kubelet component

kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs.

On startup, kubelet automatically registers its node with kube-apiserver, and the built-in cadvisor collects and monitors the node's resource usage.

For security, this document only opens the secure https port and authenticates and authorizes every request, rejecting anonymous and unauthorized access (callers such as apiserver and heapster must authenticate).

?

1. Download and distribute the kubelet binaries

See 06. Deploying the master nodes

2. Install dependency packages

See 07. Deploying the worker nodes

?

07-02-01 Create the kubelet bootstrap kubeconfig files

[root@kube-master ~]# vim /opt/k8s/script/bootstrap_kubeconfig.sh

NODE_NAMES=("kube-master" "kube-node1" "kube-node2")
for node_name in ${NODE_NAMES[@]};do
    echo ">>> ${node_name}"
    # create a token
    export BOOTSTRAP_TOKEN=$(kubeadm token create \
        --description kubelet-bootstrap-token \
        --groups system:bootstrappers:${node_name} \
        --kubeconfig ~/.kube/config)
    # set cluster parameters
    kubectl config set-cluster kubernetes \
        --certificate-authority=/opt/k8s/cert/ca.pem \
        --embed-certs=true \
        --server=https://192.168.10.10:8443 \
        --kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig
    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
        --token=${BOOTSTRAP_TOKEN} \
        --kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig
    # set context parameters
    kubectl config set-context default \
        --cluster=kubernetes \
        --user=kubelet-bootstrap \
        --kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig
    # set the default context
    kubectl config use-context default --kubeconfig=~/.kube/kubelet-bootstrap-${node_name}.kubeconfig
done

[root@kube-master ~]# chmod +x /opt/k8s/script/bootstrap_kubeconfig.sh && /opt/k8s/script/bootstrap_kubeconfig.sh

Notes:

① The kubeconfig carries a token rather than a certificate; the client certificate is created later by kube-controller-manager.

List the tokens kubeadm created for the nodes:

[root@kube-master ~]# kubeadm token list --kubeconfig ~/.kube/config

TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
8hpvxm.w5uctmxzlphfh37l   23h   2018-11-30T16:03:27+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube-node1
gktdpg.5x931bwfzf4z4hjt   23h   2018-11-30T16:03:27+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube-node2
ttbgfq.19zeet23eohtdo65   23h   2018-11-30T16:03:26+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:kube-master

?

② Each token is valid for 1 day; once expired it can no longer be used, and it will be cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled); see the sketch below for re-creating one.
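If a token expires before its node has bootstrapped, you can mint a replacement with the same kubeadm invocation used in the script above and then regenerate that node's kubeconfig; for example (the node name is illustrative):

[root@kube-master ~]# kubeadm token create \
--description kubelet-bootstrap-token \
--groups system:bootstrappers:kube-node1 \
--kubeconfig ~/.kube/config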

?

③ When kube-apiserver accepts a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers;

The Secret associated with each token:

[root@kube-master ~]# kubectl get secrets -n kube-system

NAME                     TYPE                                  DATA   AGE
bootstrap-token-8hpvxm   bootstrap.kubernetes.io/token         7      7m
bootstrap-token-gktdpg   bootstrap.kubernetes.io/token         7      7m
bootstrap-token-ttbgfq   bootstrap.kubernetes.io/token         7      7m
default-token-5lvn4      kubernetes.io/service-account-token   3      4h

?

07-02-02 Create the kubelet parameter configuration file

Starting with v1.10, some kubelet parameters must be set via a configuration file; kubelet --help warns:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag

[root@kube-master ~]# mkdir /opt/kubelet

[root@kube-master ~]# cd /opt/kubelet

[root@kube-master kubelet]# vim kubelet.config.json.template

{"kind": "KubeletConfiguration","apiVersion": "kubelet.config.k8s.io/v1beta1","authentication": {"x509": {"clientCAFile": "/opt/k8s/cert/ca.pem"},"webhook": {"enabled": true,"cacheTTL": "2m0s"},"anonymous": {"enabled": false}},"authorization": {"mode": "Webhook","webhook": {"cacheAuthorizedTTL": "5m0s","cacheUnauthorizedTTL": "30s"}},"address": "##NODE_IP##","port": 10250,"readOnlyPort": 0,"cgroupDriver": "cgroupfs","hairpinMode": "promiscuous-bridge","serializeImagePulls": false,"featureGates": {"RotateKubeletClientCertificate": true,"RotateKubeletServerCertificate": true},"clusterDomain": "cluster.local","clusterDNS": ["10.90.0.2"] }

?

07-02-03 Distribute the bootstrap kubeconfig and kubelet configuration files to all worker nodes

[root@kube-master ~]# vim /opt/k8s/script/scp_kubelet.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
NODE_NAMES=("kube-master" "kube-node1" "kube-node2")
for node_name in ${NODE_NAMES[@]};do
    echo ">>> ${node_name}"
    scp ~/.kube/kubelet-bootstrap-${node_name}.kubeconfig k8s@${node_name}:/opt/k8s/kubelet-bootstrap.kubeconfig
done

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    sed -e "s/##NODE_IP##/${node_ip}/" /opt/kubelet/kubelet.config.json.template > /opt/kubelet/kubelet.config-${node_ip}.json
    scp /opt/kubelet/kubelet.config-${node_ip}.json root@${node_ip}:/opt/k8s/kubelet.config.json
done

[root@kube-master ~]# chmod +x /opt/k8s/script/scp_kubelet.sh && /opt/k8s/script/scp_kubelet.sh

?

07-02-04 Create the kubelet systemd unit file template

[root@kube-master ~]# vim /opt/kubelet/kubelet.service.template

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/lib/kubelet
ExecStart=/opt/k8s/bin/kubelet \
    --bootstrap-kubeconfig=/opt/k8s/kubelet-bootstrap.kubeconfig \
    --cert-dir=/opt/k8s/cert \
    --kubeconfig=/opt/k8s/kubelet.kubeconfig \
    --config=/opt/k8s/kubelet.config.json \
    --hostname-override=##NODE_NAME## \
    --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest \
    --allow-privileged=true \
    --alsologtostderr=true \
    --logtostderr=false \
    --log-dir=/opt/log/kubernetes \
    --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

?

07-02-05 Bootstrap Token Auth and granting permissions

1. On startup, kubelet checks whether the file given by --kubeconfig exists; if it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

2. When kube-apiserver receives the CSR, it authenticates the embedded token (the one created earlier with kubeadm); on success it sets the request's user to system:bootstrap:<token-id> and its group to system:bootstrappers. This process is called Bootstrap Token Auth.

3. By default, this user and group have no permission to create CSRs, so kubelet fails to start with errors like:

$ sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378 26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

?

4. The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

[root@kube-master ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers
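A quick read-only check that the binding now exists:

[root@kube-master ~]# kubectl get clusterrolebinding kubelet-bootstrap -o wide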

?

07-02-06 Start the kubelet service

[root@kube-master ~]# vim /opt/k8s/script/kubelet.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
NODE_NAMES=("kube-master" "kube-node1" "kube-node2")
# distribute the kubelet systemd unit files
for node_name in ${NODE_NAMES[@]};do
    echo ">>> ${node_name}"
    sed -e "s/##NODE_NAME##/${node_name}/" /opt/kubelet/kubelet.service.template > /opt/kubelet/kubelet-${node_name}.service
    scp /opt/kubelet/kubelet-${node_name}.service root@${node_name}:/etc/systemd/system/kubelet.service
done
# start and check the kubelet service
for node_ip in ${NODE_IPS[@]};do
    ssh root@${node_ip} "mkdir -p /opt/lib/kubelet"
    ssh root@${node_ip} "/usr/sbin/swapoff -a"
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
    ssh root@${node_ip} "systemctl status kubelet |grep active"
done

Notes:

  • swap must be turned off, and the swap entry in /etc/fstab must be disabled so it stays off after a reboot, otherwise kubelet fails to start;
  • the working and log directories must be created first;
  • after starting, kubelet uses --bootstrap-kubeconfig to send a CSR to kube-apiserver; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and private key and the file referenced by --kubeconfig.
  • kube-controller-manager only creates certificates and keys for TLS Bootstrap when it is configured with the --cluster-signing-cert-file and --cluster-signing-key-file parameters.

?

07-02-07 Approve the kubelet CSR requests

CSRs can be approved manually or automatically. The automatic way is recommended: from v1.8 on, the certificates generated from approved CSRs can be rotated automatically.

1. Manually approve a CSR

(1) List the CSRs:

[root@kube-master ~]# kubectl get csr

NAME AGE REQUESTOR CONDITION

node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU 4m system:bootstrap:8hpvxm Pending

node-csr-atMwF8GpKbDEcGjzCTXF1NYo9Jc1AzE2yQoxaU8NAkw 7m system:bootstrap:ttbgfq Pending

node-csr-qxa30a9GRg35iNEl3PYZOIICMo_82qPrqNu6PizEZXw 4m system:bootstrap:gktdpg Pending

The CSRs from all three worker nodes are in the Pending state;

(2) Approve a CSR:

[root@kube-master ~]# kubectl certificate approve node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU

certificatesigningrequest.certificates.k8s.io "node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU" approved
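When several CSRs are pending, a one-liner can approve them all at once (a convenience sketch, not part of the original procedure):

[root@kube-master ~]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve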

?

(3) Check the approval result:

[root@kube-master ~]# kubectl describe csr node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU

Name: node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU

Labels: <none>

Annotations: <none>

CreationTimestamp: Thu, 29 Nov 2018 17:51:43 +0800

Requesting User: system:bootstrap:8hpvxm

Status: Approved,Issued

Subject:

Common Name: system:node:kube-node1

Serial Number:

Organization: system:nodes

Events: <none>

?

2. Automatically approve CSR requests

(1) Create three ClusterRoleBindings, used respectively to auto-approve client CSRs and to renew client and server certificates:

[root@kube-master ~]# cat > /opt/kubelet/csr-crb.yaml <<EOF

# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# To let a node of the group "system:nodes" renew its own credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-client-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
# To let a node of the group "system:nodes" renew its own server credentials
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-server-cert-renewal
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: approve-node-server-renewal-csr
  apiGroup: rbac.authorization.k8s.io

EOF

Notes:

  • auto-approve-csrs-for-group: auto-approves a node's first CSR; note that for the first CSR the requesting Group is system:bootstrappers;
  • node-client-cert-renewal: auto-approves renewals of a node's expiring client certificates; the automatically generated certificates have Group system:nodes;
  • node-server-cert-renewal: auto-approves renewals of a node's expiring server certificates; the automatically generated certificates have Group system:nodes;

?

(2) Apply the configuration:

[root@kube-master ~]# kubectl apply -f /opt/kubelet/csr-crb.yaml

?

07-02-08 Check the kubelet status

1. After a while (1-10 minutes), the CSRs of all three nodes have been auto-approved:

[root@kube-master ~]# kubectl get csr

NAME AGE REQUESTOR CONDITION

csr-kvbtt 15h system:node:kube-node1 Approved,Issued

csr-p9b9s 15h system:node:kube-node2 Approved,Issued

csr-rjpr9 15h system:node:kube-master Approved,Issued

node-csr-8Sr42M0z_LzZeHU-RCbgOynJm3Z2TsSXHuAlohfJiIM 15h system:bootstrap:ttbgfq Approved,Issued

node-csr-SdkiSnAdFByBTIJDyFWTBSTIDMJKxwxQt9gEExFX5HU 15h system:bootstrap:8hpvxm Approved,Issued

node-csr-atMwF8GpKbDEcGjzCTXF1NYo9Jc1AzE2yQoxaU8NAkw 15h system:bootstrap:ttbgfq Approved,Issued

node-csr-elVB0jp36nOHuOYlITWDZx8LoO2Ly4aW0VqgYxw_Te0 15h system:bootstrap:gktdpg Approved,Issued

node-csr-muNcDteZINLZnSv8FkhOMaP2ob5uw82PGwIAynNNrco 15h system:bootstrap:ttbgfq Approved,Issued

node-csr-qxa30a9GRg35iNEl3PYZOIICMo_82qPrqNu6PizEZXw 15h system:bootstrap:gktdpg Approved,Issued

?

2. All nodes are Ready:

[root@kube-master ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-master Ready <none> 25s v1.10.4

kube-node1 Ready <none> 7m v1.10.4

kube-node2 Ready <none> 21s v1.10.4

?

3. kube-controller-manager generated a kubeconfig file and a key pair for each node:

[root@kube-master ~]# ll /opt/k8s/kubelet.kubeconfig

-rw------- 1 root root 2280 Nov 29 18:05 /opt/k8s/kubelet.kubeconfig

[root@kube-master ~]# ll /opt/k8s/cert/ |grep kubelet

-rw-r--r-- 1 root root 1050 Nov 29 18:05 kubelet-client.crt

-rw------- 1 root root 227 Nov 29 18:01 kubelet-client.key

-rw------- 1 root root 1338 Nov 29 18:05 kubelet-server-2018-11-29-18-05-11.pem

lrwxrwxrwx 1 root root 52 Nov 29 18:05 kubelet-server-current.pem -> /opt/k8s/cert/kubelet-server-2018-11-29-18-05-11.pem

Note: the kubelet-server certificate is rotated periodically;

?

07-02-09 The API exposed by kubelet

1. After startup, kubelet listens on several ports to serve requests from kube-apiserver and other components:

[root@kube-master ~]# ss -nutlp |grep kubelet

tcp LISTEN 0 128 192.168.10.108:10250 *:* users:(("kubelet",pid=2797,fd=22))

tcp LISTEN 0 128 192.168.10.108:4194 *:* users:(("kubelet",pid=2797,fd=13))

tcp LISTEN 0 128 127.0.0.1:10248 *:* users:(("kubelet",pid=2797,fd=32))

Notes:

  • 4194: the cadvisor http service;
  • 10248: the healthz http service;
  • 10250: the https API service; note that the read-only port 10255 is not opened;

?

2. For example, when you run kubectl exec -it nginx-ds-5rmws -- sh, kube-apiserver sends kubelet a request like:

POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1

?

3. On port 10250, kubelet serves https requests for:

  • /pods, /runningpods
  • /metrics, /metrics/cadvisor, /metrics/probes
  • /spec
  • /stats, /stats/container
  • /logs
  • management endpoints such as /run/, /exec/, /attach/, /portForward/, /containerLogs/;

?

4. Since anonymous authentication is disabled and webhook authorization is enabled, every request to the https API on port 10250 must be authenticated and authorized.

The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs:

[root@kube-master ~]# kubectl describe clusterrole system:kubelet-api-admin

Name: system:kubelet-api-admin

Labels: kubernetes.io/bootstrapping=rbac-defaults

Annotations: rbac.authorization.kubernetes.io/autoupdate=true

PolicyRule:

Resources Non-Resource URLs Resource Names Verbs

--------- ----------------- -------------- -----

nodes [] [] [get list watch proxy]

nodes/log [] [] [*]

nodes/metrics [] [] [*]

nodes/proxy [] [] [*]

nodes/spec [] [] [*]

nodes/stats [] [] [*]

?

07-02-10 kubelet API authentication and authorization

1. kubelet is configured with the following authentication parameters:

  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: the CA certificate that signs client certificates, enabling https client-certificate authentication;
  • authentication.webhook.enabled=true: enables https bearer token authentication;

And with the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization;

?

2. When kubelet receives a request, it verifies the client certificate against clientCAFile, or checks whether the bearer token is valid. If neither passes, it rejects the request with Unauthorized:

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem https://192.168.10.109:10250/metrics

Unauthorized

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem -H "Authorization: Bearer 123456" https://192.168.10.109:10250/metrics

Unauthorized

?

3. After authentication, kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has RBAC permission for the requested resource;

Certificate authentication and authorization:

$ using a certificate with insufficient permissions:

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/kube-controller-manager.pem --key /opt/k8s/cert/kube-controller-manager-key.pem https://192.168.10.109:10250/metrics

Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

$ using the admin certificate with the highest privileges, created when deploying the kubectl command-line tool:

[root@kube-master cert]# curl -s --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem https://192.168.10.109:10250/metrics|head

# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.

# TYPE apiserver_client_certificate_expiration_seconds histogram

apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0

  • the values of --cacert, --cert and --key must be file paths; for a certificate in the current directory, such as ./admin.pem, the ./ prefix must not be omitted, otherwise the request returns 401 Unauthorized;

?

4. Bearer token authentication and authorization:

  Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin, giving it permission to call the kubelet API:

[root@kube-master ~]# kubectl create sa kubelet-api-test

serviceaccount "kubelet-api-test" created

[root@kube-master ~]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test

clusterrolebinding.rbac.authorization.k8s.io "kubelet-api-test" created

[root@kube-master ~]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')

[root@kube-master ~]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')

[root@kube-master ~]# curl -s --cacert /opt/k8s/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.10.109:10250/metrics|head

# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.

# TYPE apiserver_client_certificate_expiration_seconds histogram

apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0

apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0

?

07-02-11 cadvisor and metrics

  cadvisor collects the resource usage (CPU, memory, disk, network) of the containers on its node and exposes it both on its own http web page (port 4194) and on port 10250 in Prometheus metrics form.

Open http://192.168.10.108:4194/containers/ in a browser to see the cadvisor monitoring page.
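The same data can also be pulled from the command line in Prometheus format; port 4194 speaks plain http, so no certificate is needed (output will vary):

[root@kube-master ~]# curl -s http://192.168.10.108:4194/metrics | head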


07-02-12 Fetch the kubelet configuration

Fetch each node's configuration from kube-apiserver:

Use the admin certificate with the highest privileges, created when deploying the kubectl command-line tool;

[root@kube-master ~]# curl -sSL --cacert /opt/k8s/cert/ca.pem --cert /opt/k8s/cert/admin.pem --key /opt/k8s/cert/admin-key.pem https://192.168.10.10:8443/api/v1/nodes/kube-node1/proxy/configz | jq \

'.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'

{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "192.168.10.109",
  "port": 10250,
  "authentication": {
    "x509": {
      "clientCAFile": "/opt/k8s/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 5,
  "registryBurst": 10,
  "eventRecordQPS": 5,
  "eventBurst": 10,
  "enableDebuggingHandlers": true,
  "healthzPort": 10248,
  "healthzBindAddress": "127.0.0.1",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local.",
  "clusterDNS": [
    "10.96.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "2m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 110,
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 5,
  "kubeAPIBurst": 10,
  "serializeImagePulls": false,
  "evictionHard": {
    "imagefs.available": "15%",
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "failSwapOn": true,
  "containerLogMaxSize": "10Mi",
  "containerLogMaxFiles": 5,
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}

?

07-03. Deploying the kube-proxy component

  kube-proxy runs on all worker nodes; it watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.

  This section covers deploying kube-proxy in ipvs mode.

?

1. Download and distribute the kube-proxy binaries

See 06. Deploying the master nodes

2. Install dependency packages

Each node needs the ipvsadm and ipset commands, and the ip_vs kernel module loaded.

See 07. Deploying the worker nodes

?

07-03-01 Create the kube-proxy certificate

Create the certificate signing request:

[root@kube-master ~]# cd /opt/k8s/cert/

[root@kube-master cert]# cat > kube-proxy-csr.json << EOF

{"CN": "system:kube-proxy","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","ST": "BeiJing","L": "BeiJing","O": "k8s","OU": "4Paradigm"}] }

EOF

Notes:

  • CN: sets the certificate's User to system:kube-proxy;
  • the predefined RoleBinding system:node-proxier binds User system:kube-proxy to Role system:node-proxier, which grants permission to call the proxy-related APIs of kube-apiserver;
  • the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty;

?

07-03-02 Generate the certificate and private key

[root@kube-master cert]# cfssl gencert -ca=/opt/k8s/cert/ca.pem \

-ca-key=/opt/k8s/cert/ca-key.pem \

-config=/opt/k8s/cert/ca-config.json \

-profile=kubernetes kube-proxy-csr.json | cfssljson_linux-amd64 -bare kube-proxy

?

?

[root@kube-master cert]# ls *kube-proxy*

kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem

?

07-03-03 Create the kubeconfig file

[root@kube-master ~]# kubectl config set-cluster kubernetes \

--certificate-authority=/opt/k8s/cert/ca.pem \

--embed-certs=true \

--server=https://192.168.10.10:8443 \

--kubeconfig=/root/.kube/kube-proxy.kubeconfig

?

[root@kube-master ~]# kubectl config set-credentials kube-proxy \

--client-certificate=/opt/k8s/cert/kube-proxy.pem \

--client-key=/opt/k8s/cert/kube-proxy-key.pem \

--embed-certs=true \

--kubeconfig=/root/.kube/kube-proxy.kubeconfig

?

[root@kube-master ~]# kubectl config set-context kube-proxy@kubernetes \

--cluster=kubernetes \

--user=kube-proxy \

--kubeconfig=/root/.kube/kube-proxy.kubeconfig

?

[root@kube-master ~]# kubectl config use-context kube-proxy@kubernetes --kubeconfig=/root/.kube/kube-proxy.kubeconfig

Note:

  • --embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths would be written);

?

[root@kube-master ~]# kubectl config view --kubeconfig=/root/.kube/kube-proxy.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.10.10:8443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: kube-proxy@kubernetes
current-context: kube-proxy@kubernetes
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

?

07-03-04 Create the kube-proxy configuration file

  Starting with v1.10, some kube-proxy parameters can be set in a configuration file, which can be generated with the --write-config-to option.

Create the kube-proxy config file template:

[root@kube-master ~]# mkdir /opt/kube-proxy

[root@kube-master ~]# cd /opt/kube-proxy

[root@kube-master kube-proxy]# cat >kube-proxy.config.yaml.template <<EOF

apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: ##NODE_IP##
clientConnection:
  kubeconfig: /opt/k8s/kube-proxy.kubeconfig
clusterCIDR: 10.30.0.0/16
healthzBindAddress: ##NODE_IP##:10256
hostnameOverride: ##NODE_NAME##
kind: KubeProxyConfiguration
metricsBindAddress: ##NODE_IP##:10249
mode: "ipvs"

EOF

Notes:

  • bindAddress: the listen address;
  • clientConnection.kubeconfig: the kubeconfig used to connect to the apiserver;
  • clusterCIDR: the Pod network CIDR (10.30.0.0/16 in this deployment, matching the flannel/docker0 addresses seen earlier); kube-proxy uses --cluster-cidr to distinguish traffic from inside and outside the cluster, and only performs SNAT on requests to Service IPs when --cluster-cidr or --masquerade-all is set;
  • hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find the Node after startup and will not create any ipvs rules;
  • mode: use ipvs mode;

?
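The distribution script in the next step copies a kube-proxy.service unit file that this post does not otherwise show; below is a minimal sketch consistent with the conventions used so far (binary under /opt/k8s/bin, working directory /opt/lib/kube-proxy, logs under /opt/log/kubernetes) — treat the exact flags as an assumption:

[root@kube-master ~]# cat > /opt/kube-proxy/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/lib/kube-proxy
ExecStart=/opt/k8s/bin/kube-proxy \\
    --config=/opt/k8s/kube-proxy.config.yaml \\
    --alsologtostderr=true \\
    --logtostderr=false \\
    --log-dir=/opt/log/kubernetes \\
    --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF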

07-03-05 Distribute the kubeconfig and kube-proxy systemd unit files; start and check the kube-proxy service

[root@kube-master ~]# vim /opt/k8s/script/kube_proxy.sh

NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")
NODE_NAMES=("kube-master" "kube-node1" "kube-node2")

for (( i=0; i < 3; i++ ));do
    echo ">>> ${NODE_NAMES[i]}"
    sed -e "s/##NODE_NAME##/${NODE_NAMES[i]}/" -e "s/##NODE_IP##/${NODE_IPS[i]}/" /opt/kube-proxy/kube-proxy.config.yaml.template > /opt/kube-proxy/kube-proxy-${NODE_NAMES[i]}.config.yaml
    scp /opt/kube-proxy/kube-proxy-${NODE_NAMES[i]}.config.yaml root@${NODE_NAMES[i]}:/opt/k8s/kube-proxy.config.yaml
done

for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    scp /root/.kube/kube-proxy.kubeconfig k8s@${node_ip}:/opt/k8s/
    scp /opt/kube-proxy/kube-proxy.service root@${node_ip}:/etc/systemd/system/
    ssh root@${node_ip} "mkdir -p /opt/lib/kube-proxy"
    ssh root@${node_ip} "mkdir -p /opt/log/kubernetes && chown -R k8s /opt/log/kubernetes"
    ssh root@${node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
    ssh k8s@${node_ip} "systemctl status kube-proxy|grep Active"
done

[root@kube-master ~]# chmod +x /opt/k8s/script/kube_proxy.sh && /opt/k8s/script/kube_proxy.sh

?

07-03-06 Check the listening ports and metrics

[root@kube-master ~]# ss -nutlp |grep kube-prox

tcp LISTEN 0 128 192.168.10.108:10256 *:* users:(("kube-proxy",pid=34230,fd=10))

tcp LISTEN 0 128 192.168.10.108:10249 *:* users:(("kube-proxy",pid=34230,fd=11))

  • 10249: the http prometheus metrics port (see the curl check below);
  • 10256: the http healthz port;
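Both endpoints speak plain http, so a quick curl confirms the metrics are being served (output will vary):

[root@kube-master ~]# curl -s http://192.168.10.108:10249/metrics | head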

?

07-03-07 Check the ipvs routing rules

[root@kube-master ~]# /usr/sbin/ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 10.96.0.1:443 rr persistent 10800

-> 192.168.10.108:6443 Masq 1 0 0

-> 192.168.10.109:6443 Masq 1 0 0

-> 192.168.10.110:6443 Masq 1 0 0

As shown, all requests to port 443 of the kubernetes cluster IP are forwarded to port 6443 of the kube-apiservers;
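As a cross-check, the virtual server address above should match the cluster IP of the kubernetes Service (the AGE column will differ):

[root@kube-master ~]# kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   21h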

?

08. Verifying cluster functionality

This section uses a daemonset to verify that the master and worker nodes work properly.

08-01 Check node status

[root@kube-master ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

kube-master Ready <none> 21h v1.10.4

kube-node1 Ready <none> 21h v1.10.4

kube-node2 Ready <none> 21h v1.10.4

All nodes showing Ready means they are healthy.

?

08-02 Create a test manifest

[root@kube-master ~]# mkdir /opt/k8s/demo

[root@kube-master ~]# cat > /opt/k8s/demo/nginx-ds.yml <<EOF

apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

EOF

?

Apply the manifest:

[root@kube-master ~]# kubectl create -f /opt/k8s/demo/nginx-ds.yml

service "nginx-ds" created

daemonset.extensions "nginx-ds" created

?

08-03 Check Pod IP connectivity across Nodes

Pulling the image and creating the Pods takes a while, so wait a moment:

[root@kube-master ~]# kubectl get pods -o wide|grep nginx-ds

nginx-ds-7cz4p 1/1 Running 0 4m 10.30.22.2 kube-master

nginx-ds-lg585 1/1 Running 0 4m 10.30.44.2 kube-node2

nginx-ds-zc448 1/1 Running 0 4m 10.30.33.2 kube-node1

As shown, the nginx-ds Pod IPs are 10.30.22.2, 10.30.44.2 and 10.30.33.2. Ping these three IPs from every Node to check connectivity:

[root@kube-master ~]# NODE_IPS=("192.168.10.108" "192.168.10.109" "192.168.10.110")

[root@kube-master ~]# for node_ip in ${NODE_IPS[@]};do
    echo ">>> ${node_ip}"
    ssh ${node_ip} "ping -c 1 10.30.22.2"
    ssh ${node_ip} "ping -c 1 10.30.44.2"
    ssh ${node_ip} "ping -c 1 10.30.33.2"
done

?

08-04 Check Service IP and port reachability

[root@kube-master ~]# kubectl get svc |grep nginx-ds

nginx-ds NodePort 10.96.192.157 <none> 80:15131/TCP 9m

As shown:

  • Service Cluster IP: 10.96.192.157
  • service port: 80
  • NodePort: 15131

curl the Service IP on every Node:

[root@kube-master ~]# curl 10.96.192.157

[root@kube-node1 ~]# curl 10.96.192.157

[root@kube-node2 ~]# curl 10.96.192.157

The expected output is the nginx welcome page.

?

08-05 Check NodePort reachability

Run the following on every Node; the expected output is the nginx welcome page:

[root@kube-master ~]# curl 192.168.10.108:15131

[root@kube-master ~]# curl 192.168.10.109:15131

[root@kube-master ~]# curl 192.168.10.110:15131

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

????body {

????????width: 35em;

????????margin: 0 auto;

????????font-family: Tahoma, Verdana, Arial, sans-serif;

????}

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

?

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

?

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

?

Reposted from: https://www.cnblogs.com/dengbingbing/p/10399217.html
