[Series] Kubernetes Container Management
Table of Contents
- 1. Basic Concepts
- 2. Setting Up a Cluster
- 3. Core Techniques (Part 1)
- 4. Core Techniques (Part 2)
- 5. Log Management
- 6. Monitoring Platform
- 7. Setting Up a Highly Available Cluster
- 8. Hands-on Cluster Project Deployment
1. Basic Concepts
Overview and features: Kubernetes (k8s) is a containerized cluster management system open-sourced by Google in 2014. It is mainly used to deploy container applications, makes applications easy to scale, and makes containerized deployment simpler and more efficient. The traditional way to deploy an application is to install it with plugins or scripts; the drawback is that the application's runtime, configuration, management, and entire lifecycle are tied to the current operating system, which makes upgrades, updates, and rollbacks awkward. Some of this can be worked around with virtual machines, but VMs are heavyweight and hurt portability. The newer approach is to deploy with containers: containers are isolated from one another, each has its own filesystem, processes in different containers do not affect each other, and compute resources can be partitioned between them. Compared with VMs, containers can be deployed quickly, and because a container is decoupled from the underlying infrastructure and the host filesystem, it can be migrated across clouds and across operating-system versions.
Features and architecture: k8s offers automatic bin packing, self-healing, horizontal scaling, service discovery, rolling updates, version rollback, secret and configuration management, storage orchestration, and batch execution. Its architecture consists of the following components:
- Master Node: the control node of the k8s cluster; it schedules and manages the cluster and accepts operation requests from users outside the cluster. The Master Node is made up of the API Server, Scheduler, Cluster State Store (the etcd database), and Controller Manager Server
- apiserver: the unified entry point to the cluster, exposed as a RESTful API; cluster state is handed to etcd for storage
- scheduler: node scheduling; picks the node on which an application will be deployed
- controller-manager: handles the routine background tasks of the cluster; one controller per resource type
- etcd: the storage system that holds the cluster's state data
- Worker Node: a worker node of the cluster; it runs the user's business application containers and includes kubelet, kube-proxy, and a container runtime
- kubelet: the master's agent on each node; it manages the containers on the local machine
- kube-proxy: provides network proxying, load balancing, and related functions
Core concepts: Pod, Controller, Service
- Pod: the smallest deployment unit; a group of containers; containers in a Pod share the network; its lifetime is short
- Controller: ensures the expected number of Pod replicas; stateless application deployment; stateful application deployment; making sure every node runs the same Pod; one-off and scheduled jobs
- Service: defines access rules for a group of Pods
2. Setting Up a Cluster
k8s hardware requirements: in a test environment, master (2 CPUs, 4 GB RAM, 20 GB disk) and node (4 CPUs, 8 GB RAM, 40 GB disk); production masters and nodes both require considerably more.
Ways to build a cluster: the kubeadm way (kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster; see the official documentation link) and the binary way (download the release binaries from GitHub and deploy every component by hand to assemble a Kubernetes cluster).
k8s platform planning: an environment is planned as either a single-master cluster or a multi-master cluster (the latter is the common choice, since it avoids a single master failure bringing the whole system down).
平臺(tái)搭建1:單master集群的kubeadm方式安裝
- Prepare the virtual machines (take snapshots so the environment can be restored quickly). Network requirement: the cluster machines must be able to reach one another and the Internet (CentOS 7 is the OS used in this example)
- Initialize the three virtual machines (take snapshots so the environment can be restored quickly)
# 1. Disable the firewall on all three machines
systemctl stop firewalld     # temporary
systemctl disable firewalld  # permanent
# 2. Disable selinux on all three machines
setenforce 0                                         # temporary
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
# 3. Disable the swap partition on all three machines
swapoff -a                                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab                  # permanent
# 4. Set the hostname on each machine
hostnamectl set-hostname k8s-master
# 5. Add hosts entries on the master
[root@k8s-master ~]# cat >> /etc/hosts << EOF
> 172.16.90.146 k8s-master
> 172.16.90.145 k8s-node1
> 172.16.90.144 k8s-node2
> EOF
# 6. On all three machines, pass bridged IPv4 traffic to the iptables chains and apply the setting
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-master ~]# sysctl --system
# 7. Set up time synchronization on all three machines
yum install ntpdate -y
ntpdate time.windows.com
- Install Docker, kubeadm, kubelet, and kubectl on all nodes
# Install wget
[root@k8s-master ~]# yum install wget
# Download the docker repo file
[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install docker
[root@k8s-master ~]# yum -y install docker-ce-18.06.1.ce-3.el7
# Enable and start docker
[root@k8s-master ~]# systemctl enable docker && systemctl start docker
# Verify the installation
[root@k8s-master ~]# docker --version
# Configure the Aliyun registry mirror
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
> }
> EOF
# Add the Aliyun Kubernetes yum repository
[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
# Install kubeadm, kubelet, and kubectl
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl
# Enable kubelet at boot
[root@k8s-master ~]# systemctl enable kubelet
- 在master節(jié)點(diǎn)執(zhí)行kubeadm init命令進(jìn)行初始化# 在master節(jié)點(diǎn)機(jī)部署Kubernetes Master,并執(zhí)行一下目錄 # 由于默認(rèn)拉取鏡像地址 k8s.gcr.io 國(guó)內(nèi)無(wú)法訪(fǎng)問(wèn),這里指定阿里云鏡像倉(cāng)庫(kù)地址。 [root@k8s-master ~]# kubeadm init --apiserver-advertise-address=172.16.90.146 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.21.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 ... error execution phase preflight: [preflight] Some fatal errors occurred:[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.0: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.0 not found , error: exit status 1 ...# 檢查發(fā)現(xiàn)缺少鏡像coredns [root@k8s-master ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 3 months ago 126MB registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 3 months ago 122MB registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 3 months ago 120MB registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 3 months ago 50.6MB registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 11 months ago 253MB# 解決該問(wèn)題:Kubernetes 需要的是 registry.aliyuncs.com/google_containers/coredns:v1.8.0 這個(gè)鏡像,使用 docker tag 命令重命名 # 拉取鏡像 [root@k8s-master ~]# docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0 # 重命名 [root@k8s-master ~]# docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0 # 刪除原有鏡像 [root@k8s-master ~]# docker rmi registry.aliyuncs.com/google_containers/coredns:1.8.0# 檢查 [root@k8s-master ~]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE registry.aliyuncs.com/google_containers/kube-apiserver v1.21.0 4d217480042e 3 months ago 126MB registry.aliyuncs.com/google_containers/kube-proxy v1.21.0 38ddd85fe90e 3 months ago 122MB registry.aliyuncs.com/google_containers/kube-controller-manager v1.21.0 09708983cc37 3 months ago 120MB registry.aliyuncs.com/google_containers/kube-scheduler v1.21.0 62ad3129eca8 3 months ago 50.6MB registry.aliyuncs.com/google_containers/pause 3.4.1 0f8457a4c2ec 6 months ago 683kB registry.aliyuncs.com/google_containers/coredns v1.8.0 296a6d5035e2 9 months ago 42.5MB registry.aliyuncs.com/google_containers/etcd 3.4.13-0 0369cf4303ff 11 months ago 253MB# 再次執(zhí)行kubeadm init ... Your Kubernetes control-plane has initialized successfully!To start using your cluster, you need to run the following as a regular user:mkdir -p $HOME/.kubesudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configsudo chown $(id -u):$(id -g) $HOME/.kube/config ... kubeadm join 172.16.90.146:6443 --token y761gh.vxrkrulwu0tt74sw \--discovery-token-ca-cert-hash sha256:b611e2e88052ec60ac4716b0a9a48a9fa45d99a4b457563593dc29805214bbc5 # 使用 kubectl 工具: [root@k8s-master ~]# mkdir -p $HOME/.kube [root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config [root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config [root@k8s-master ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master NotReady control-plane,master 3m25s v1.21.3
- On each node, run the kubeadm join command to add it to the cluster
[root@k8s-node1 ~]# kubeadm join 172.16.90.146:6443 --token y761gh.vxrkrulwu0tt74sw --discovery-token-ca-cert-hash sha256:b611e2e88052ec60ac4716b0a9a48a9fa45d99a4b457563593dc29805214bbc5
[root@k8s-node2 ~]# kubeadm join 172.16.90.146:6443 --token y761gh.vxrkrulwu0tt74sw --discovery-token-ca-cert-hash sha256:b611e2e88052ec60ac4716b0a9a48a9fa45d99a4b457563593dc29805214bbc5
# Check that the nodes were added
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   11m    v1.21.3
k8s-node1    NotReady   <none>                 2m6s   v1.21.3
k8s-node2    NotReady   <none>                 60s    v1.21.3
- Install the network plugin
[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Check that the system Pods are running
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-59d64cd4d4-dqx8m             1/1     Running   0          31m
coredns-59d64cd4d4-z8pdq             1/1     Running   0          31m
etcd-k8s-master                      1/1     Running   0          31m
kube-apiserver-k8s-master            1/1     Running   0          31m
kube-controller-manager-k8s-master   1/1     Running   0          31m
kube-flannel-ds-h7v2g                1/1     Running   0          2m46s
kube-flannel-ds-xmzfh                1/1     Running   0          2m46s
kube-flannel-ds-z9nbj                1/1     Running   0          2m46s
kube-proxy-6c9cd                     1/1     Running   0          20m
kube-proxy-cnvfg                     1/1     Running   0          31m
kube-proxy-p4nx4                     1/1     Running   0          22m
kube-scheduler-k8s-master            1/1     Running   0          31m
# Verify that all nodes are now Ready
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   32m   v1.21.3
k8s-node1    Ready    <none>                 23m   v1.21.3
k8s-node2    Ready    <none>                 22m   v1.21.3
- Test the Kubernetes cluster
# Create a Pod in the cluster and verify that it runs normally
# Pull the nginx image and create a deployment
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
# Check the Pod status
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-lj24f   1/1     Running   0          62s
# Expose port 80 externally
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
# Look up the exposed node port
[root@k8s-master ~]# kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-lj24f   1/1     Running   0          3m4s
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        39m
service/nginx        NodePort    10.105.227.129   <none>        80:30640/TCP   20s
# Access the application at http://NodeIP:Port (NodeIP is any node's IP; Port is the exposed node port)
平臺(tái)搭建2:單master集群的二進(jìn)制方式安裝
- Prepare the virtual machines (take snapshots so the environment can be restored quickly). Network requirement: the cluster machines must be able to reach one another and the Internet (CentOS 7 is the OS used in this example)
角色I(xiàn)P組件 k8s-master 172.16.90.147 kube-apiserver,kube-controller-manager,kube -scheduler,etcd k8s-node1 172.16.90.148 kubelet,kube-proxy,docker etcd - 兩個(gè)虛擬機(jī)的初始化(記得做好快照,以便環(huán)境快速恢復(fù))# 關(guān)閉防火墻 systemctl stop firewalld systemctl disable firewalld# 關(guān)閉 selinux sed -i 's/enforcing/disabled/' /etc/selinux/config # 永久 setenforce 0 # 臨時(shí)# 關(guān)閉 swap swapoff -a # 臨時(shí) sed -ri 's/.*swap.*/#&/' /etc/fstab # 永久# 根據(jù)規(guī)劃設(shè)置主機(jī)名 hostnamectl set-hostname <hostname># 在 master 添加 hosts cat >> /etc/hosts << EOF 172.16.90.147 m1 172.16.90.148 n1 EOF# 將橋接的 IPv4 流量傳遞到 iptables 的鏈 cat > /etc/sysctl.d/k8s.conf << EOF net.bridge.bridge-nf-call-ip6tables = 1 net.bridge.bridge-nf-call-iptables = 1 EOF# 生效 sysctl --system # 時(shí)間同步 yum install ntpdate -y ntpdate time.windows.com
- 為etcd自簽證書(shū)# 準(zhǔn)備 cfssl 證書(shū)生成工具 # cfssl 是一個(gè)開(kāi)源的證書(shū)管理工具,使用 json 文件生成證書(shū),相比 openssl 更方便使用。 找任意一臺(tái)服務(wù)器操作,這里用 Master 節(jié)點(diǎn) # 下載 wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64 mv cfssl_linux-amd64 /usr/local/bin/cfssl mv cfssljson_linux-amd64 /usr/local/bin/cfssljson mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo# 生成 Etcd 證書(shū) # 自簽證書(shū)頒發(fā)機(jī)構(gòu)(CA) # 創(chuàng)建工作目錄 [root@m1 TSL]# mkdir -p ~/TLS/{etcd,k8s} [root@m1 TSL]# cd TLS/etcd# 自簽CA [root@m1 TSL]# cat > ca-config.json<< EOF {"signing": { "default": { "expiry": "87600h" },"profiles": { "www": { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] } } } } EOF [root@m1 TSL]# cat > ca-csr.json<< EOF { "CN": "etcd CA", "key": { "algo": "rsa", "size": 2048 },"names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing"}] } EOF# 生成證書(shū) [root@m1 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca - [root@m1 etcd]# ls *pem ca-key.pem ca.pem# 使用自簽 CA 簽發(fā) Etcd HTTPS 證書(shū) # 創(chuàng)建證書(shū)申請(qǐng)文件 [root@m1 etcd]# cat > server-csr.json<< EOF {"CN": "etcd","hosts": ["172.16.90.147","172.16.90.148"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing"}] } EOF # 注意:上述文件 hosts 字段中 IP 為所有 etcd 節(jié)點(diǎn)的集群內(nèi)部通信 IP,一個(gè)都不能少!為了 方便后期擴(kuò)容可以多寫(xiě)幾個(gè)預(yù)留的 IP。# 生成證書(shū) [root@m1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server#查看結(jié)果 [root@m1 etcd]# ls server*pem server-key.pem server.pem
- 部署etcd集群# 以下在節(jié)點(diǎn) 1 上操作,為簡(jiǎn)化操作,待會(huì)將節(jié)點(diǎn) 1 生成的所有文件拷貝到節(jié)點(diǎn) 2 和節(jié)點(diǎn) 3. # 創(chuàng)建工作目錄并解壓二進(jìn)制包 [root@m1 ~]# mkdir -p /opt/etcd/{bin,cfg,ssl} [root@m1 ~]# tar zxvf etcd-v3.4.9-linux-amd64.tar.gz [root@m1 ~]# mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/ [root@m1 opt]# tree etcd etcd ├── bin │ ├── etcd │ └── etcdctl ├── cfg └── ssl# 創(chuàng)建etcd配置文件 [root@m1 ~]# cat > /opt/etcd/cfg/etcd.conf << EOF #[Member] #ETCD_NAME:節(jié)點(diǎn)名稱(chēng),集群中唯一 ETCD_NAME="etcd-1" #ETCD_DATA_DIR:數(shù)據(jù)目錄 ETCD_DATA_DIR="/var/lib/etcd/default.etcd" #ETCD_LISTEN_PEER_URLS:集群通信監(jiān)聽(tīng)地址 ETCD_LISTEN_PEER_URLS="https://172.16.90.147:2380" #ETCD_LISTEN_CLIENT_URLS:客戶(hù)端訪(fǎng)問(wèn)監(jiān)聽(tīng)地址 ETCD_LISTEN_CLIENT_URLS="https://172.16.90.147:2379" #[Clustering] #ETCD_INITIAL_ADVERTISE_PEER_URLS:集群通告地址 ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.90.147:2380" #ETCD_ADVERTISE_CLIENT_URLS:客戶(hù)端通告地址 ETCD_ADVERTISE_CLIENT_URLS="https://172.16.90.147:2379" #ETCD_INITIAL_CLUSTER:集群節(jié)點(diǎn)地址 ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.90.147:2380,etcd-2=https://172.16.90.148:2380" #ETCD_INITIAL_CLUSTER_TOKEN:集群 Token ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" #ETCD_INITIAL_CLUSTER_STATE:加入集群的當(dāng)前狀態(tài),new 是新集群,existing 表示加入 已有集群 ETCD_INITIAL_CLUSTER_STATE="new"#systemd 管理 etcd [root@m1 /]# cat > /usr/lib/systemd/system/etcd.service << EOF [Unit] Description=Etcd Server After=network.target After=network-online.target Wants=network-online.target [Service] Type=notify EnvironmentFile=/opt/etcd/cfg/etcd.conf ExecStart=/opt/etcd/bin/etcd \ --cert-file=/opt/etcd/ssl/server.pem \ --key-file=/opt/etcd/ssl/server-key.pem \ --peer-cert-file=/opt/etcd/ssl/server.pem \ --peer-key-file=/opt/etcd/ssl/server-key.pem \ --trusted-ca-file=/opt/etcd/ssl/ca.pem \ --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \ --logger=zap Restart=on-failure LimitNOFILE=65536 [Install] WantedBy=multi-user.target EOF# 筆者再啟動(dòng)etcd服務(wù)時(shí)報(bào)錯(cuò),經(jīng)排查因?yàn)?\ 引發(fā),改后可行 [Unit] Description=Etcd Server After=network.target After=network-online.target Wants=network-online.target [Service] Type=notify EnvironmentFile=/opt/etcd/cfg/etcd.conf ExecStart=/opt/etcd/bin/etcd --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem --logger=zap Restart=on-failure LimitNOFILE=65536 [Install] WantedBy=multi-user.target# 拷貝剛才生成的證書(shū) [root@m1 ssl]# cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/ [root@m1 ssl]# ls ca-key.pem ca.pem server-key.pem server.pem# 將上面mastert所有生成的文件拷貝到從節(jié)點(diǎn) [root@m1 system]# scp -r /opt/etcd/ root@172.16.90.148:/opt/ [root@m1 system]# scp /usr/lib/systemd/system/etcd.service root@172.16.90.148:/usr/lib/systemd/system/# 到從節(jié)點(diǎn)點(diǎn)修改配置文件 [root@n1 ~]# vim /opt/etcd/cfg/etcd.conf #[Member] ETCD_NAME="etcd-2" ETCD_DATA_DIR="/var/lib/etcd/default.etcd" ETCD_LISTEN_PEER_URLS="https://172.16.90.148:2380" ETCD_LISTEN_CLIENT_URLS="https://172.16.90.148:2379" #[Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.90.148:2380" ETCD_ADVERTISE_CLIENT_URLS="https://172.16.90.148:2379" ETCD_INITIAL_CLUSTER="etcd-1=https://172.16.90.147:2380,etcd-2=https://172.16.90.148:2380" ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" ETCD_INITIAL_CLUSTER_STATE="new"# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) # 重新服務(wù)的配置文件 [root@m1 system]# 
systemctl daemon-reload #啟動(dòng)etcd服務(wù) [root@m1 system]# systemctl start etcd #有錯(cuò)誤可用此命令查看日志;也可檢查是否啟動(dòng) [root@m1 system]# systemctl status etcd.service #將etcd服務(wù)設(shè)置為開(kāi)機(jī)啟動(dòng) [root@m1 system]# systemctl enable etcd# 在master節(jié)點(diǎn)執(zhí)行以下命令,檢查是否啟動(dòng)成功 [root@m1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.16.90.147:2379,https://172.16.90.148:2379" endpoint health https://172.16.90.147:2379 is healthy: successfully committed proposal: took = 23.120604ms https://172.16.90.148:2379 is healthy: successfully committed proposal: took = 24.304144ms #如果輸出上面信息,就說(shuō)明集群部署成功。如果有問(wèn)題第一步先看日志: /var/log/message 或 journalctl -u etcd
- Install docker
# Run the following on all nodes. A binary install is used here; a yum install works just as well.
# Unpack the binary package
[root@m1 ~]# tar zxvf docker-19.03.9.tgz
[root@m1 ~]# mv docker/* /usr/bin
# Manage docker with systemd
[root@m1 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
# Create the docker config file (registry-mirrors: the Aliyun registry mirror)
[root@m1 ~]# cat > /etc/docker/daemon.json << EOF
> {
>   "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
> }
> EOF
- 為apiserver自簽證書(shū)# 主從節(jié)點(diǎn)間的訪(fǎng)問(wèn):添加可信任的ip列表 或者 寫(xiě)到CA證書(shū)發(fā)送 # 下列以 添加可信任的ip列表 為例進(jìn)行演示# 創(chuàng)建CA配置json文件 # 自簽證書(shū)頒發(fā)機(jī)構(gòu)(CA) [root@m1 k8s]# vim ca-config.json {"signing": {"default": {"expiry": "87600h"},"profiles": {"kubernetes": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"]}}} }[root@m1 k8s]# vim ca-csr.json {"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "HuBei","ST": "WuHan","O": "k8s","OU": "System"}] }# 使用自簽 CA 簽發(fā) kube-apiserver HTTPS 證書(shū) # 創(chuàng)建apiserver證書(shū)的所需配置文件 [root@m1 k8s]# vim kube-proxy-csr.json {"CN": "system:kube-proxy","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "HuBei","ST": "WuHan","O": "k8s","OU": "System"}] }# 創(chuàng)建證書(shū)申請(qǐng)文件: [root@m1 k8s]# vim server-csr.json {"CN": "kubernetes","hosts": ["10.0.0.1","127.0.0.1","kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local","172.16.90.147","172.16.90.148","172.16.90.149","172.16.90.150","172.16.90.151"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "HuBei","ST": "WuHan","O": "k8s","OU": "System"}] } # 注:host中的最后幾個(gè)IP為需要連接apiserver的IP,一般為master集群的所有IP,和負(fù)載均衡LB的所有IP和VIP,本文中的IP # "10.16.8.150", master01 # "10.16.8.151", master02 # "10.16.8.156", LB # "10.16.8.155", 備用IP # "10.16.8.164" 備用IP # 其中10.16.8.168即可信任的IP列表 # 原配置: # ... # "10.0.0.1", # "127.0.0.1", # "kubernetes", # "kubernetes.default", # "kubernetes.default.svc", # "kubernetes.default.svc.cluster", # "kubernetes.default.svc.cluster.local", # "10.16.8.150", # "10.16.8.151", # "10.16.8.156", # "10.16.8.155", # "10.16.8.164" # ...# 自建CA,生成證書(shū) [root@m1 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -# 生成證書(shū) [root@k8s-master01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server [root@k8s-master01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy# 檢查證書(shū)生成情況 [root@m1 k8s]# ll *.pem [root@m1 k8s]# ll *.pem -rw-------. 1 root root 1679 7月 28 19:14 ca-key.pem -rw-r--r--. 1 root root 1346 7月 28 19:14 ca.pem -rw-------. 1 root root 1675 7月 28 19:24 kube-proxy-key.pem -rw-r--r--. 1 root root 1391 7月 28 19:24 kube-proxy.pem -rw-------. 1 root root 1675 7月 28 19:22 server-key.pem -rw-r--r--. 1 root root 1635 7月 28 19:22 server.pem# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) systemctl daemon-reload systemctl start docker systemctl enable docker
- 部署master組件(由于未知錯(cuò)誤,筆者CentOS7無(wú)法識(shí)別 “”,此處有*.conf 和 *.server文件集合,驗(yàn)證碼:nht1)# 注:打開(kāi)鏈接你會(huì)發(fā)現(xiàn)里面有很多包,下載一個(gè) server 包就夠了,包含了 Master 和 Worker Node 二進(jìn)制文件。# 解壓二進(jìn)制包 [root@m1 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs} [root@m1 kubernetes]# tar -xzvf kubernetes-server-linux-amd64.tar.gz [root@m1 ~]# cd kubernetes/server/bin [root@m1 bin]# cp kube-apiserver kube-scheduler kube-controller-manager /opt/kubernetes/bin [root@m1 bin]# cp kubectl /usr/bin/# 部署 kube-apiserver # 創(chuàng)建配置文件[root@m1 bin]# vim /opt/kubernetes/cfg/kube-apiserver.conf KUBE_APISERVER_OPTS="--logtostderr=false \ --v=2 \ --log-dir=/opt/kubernetes/logs \ --etcd-servers=https://172.16.90.147:2379,https://172.16.90.148:2379 \ --bind-address=172.16.90.147 \ --secure-port=6443 \ --advertise-address=172.16.90.147 \ --allow-privileged=true \ --service-cluster-ip-range=10.0.0.0/24 \ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \ --authorization-mode=RBAC,Node \ --enable-bootstrap-token-auth=true \ --token-auth-file=/opt/kubernetes/cfg/token.csv \ --service-node-port-range=30000-32767 \ --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem \ --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem \ --tls-cert-file=/opt/kubernetes/ssl/server.pem \ --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \ --client-ca-file=/opt/kubernetes/ssl/ca.pem \ --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \ --etcd-cafile=/opt/etcd/ssl/ca.pem \ --etcd-certfile=/opt/etcd/ssl/server.pem \ --etcd-keyfile=/opt/etcd/ssl/server-key.pem \ --audit-log-maxage=30 \ --audit-log-maxbackup=3 \ --audit-log-maxsize=100 \ --audit-log-path=/opt/kubernetes/logs/k8s-audit.log" # 配置詳解 # –logtostderr:啟用日志 # —v:日志等級(jí) # –log-dir:日志目錄 # –etcd-servers:etcd 集群地址 # –bind-address:監(jiān)聽(tīng)地址 # –secure-port:https 安全端口 # –advertise-address:集群通告地址 # –allow-privileged:啟用授權(quán) # –service-cluster-ip-range:Service 虛擬 IP 地址段 # –enable-admission-plugins:準(zhǔn)入控制模塊 # –authorization-mode:認(rèn)證授權(quán),啟用 RBAC 授權(quán)和節(jié)點(diǎn)自管理 # –enable-bootstrap-token-auth:啟用 TLS bootstrap 機(jī)制 # –token-auth-file:bootstrap token 文件 # –service-node-port-range:Service nodeport 類(lèi)型默認(rèn)分配端口范圍 # –kubelet-client-xxx:apiserver 訪(fǎng)問(wèn) kubelet 客戶(hù)端證書(shū) # –tls-xxx-file:apiserver https 證書(shū) # –etcd-xxxfile:連接 Etcd 集群證書(shū) # –audit-log-xxx:審計(jì)日志# 拷貝剛才生成的證書(shū),把剛才生成的證書(shū)拷貝到配置文件中的路徑 [root@m1 bin]# cp ~/TLS/k8s/ca*pem ~/TLS/k8s/server*pem /opt/kubernetes/ssl/# 啟用 TLS Bootstrapping 機(jī)制 # TLS Bootstraping:Master apiserver 啟用 TLS 認(rèn)證后,Node 節(jié)點(diǎn) kubelet 和 kube- proxy 要與 kube-apiserver 進(jìn)行通信,必須使用 CA 簽發(fā)的有效證書(shū)才可以,當(dāng) Node 節(jié)點(diǎn)很多時(shí),這種客戶(hù)端證書(shū)頒發(fā)需要大量工作,同樣也會(huì)增加集群擴(kuò)展復(fù)雜度。為了 簡(jiǎn)化流程,Kubernetes 引入了 TLS bootstraping 機(jī)制來(lái)自動(dòng)頒發(fā)客戶(hù)端證書(shū),kubelet 會(huì)以一個(gè)低權(quán)限用戶(hù)自動(dòng)向 apiserver 申請(qǐng)證書(shū),kubelet 的證書(shū)由 apiserver 動(dòng)態(tài)簽署。 # 所以強(qiáng)烈建議在 Node 上使用這種方式,目前主要用于 kubelet,kube-proxy 還是由我 們統(tǒng)一頒發(fā)一個(gè)證書(shū)。 # 創(chuàng)建上述配置文件中 token 文件: [root@m1 bin]# vim /opt/kubernetes/cfg/token.csv c47ffb939f5ca36231d9e3121a252940,kubelet-bootstrap,10001,"system:node-bootstrapper" # 格式:token,用戶(hù)名,UID,用戶(hù)組。token 也可自行生成替換: # head -c 16 /dev/urandom | od -An -t x | tr -d ' '# systemd 管理 apiserver [root@m1 bin]# vim /usr/lib/systemd/system/kube-apiserver.service [Unit] Description=Kubernetes API Server 
Documentation=https://github.com/kubernetes/kubernetes [Service] EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS Restart=on-failure [Install] WantedBy=multi-user.target# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) [root@m1 ~]# systemctl daemon-reload [root@m1 ~]# systemctl start kube-apiserver [root@m1 ~]# systemctl enable kube-apiserver# 授權(quán) kubelet-bootstrap 用戶(hù)允許請(qǐng)求證書(shū) [root@m1 k8s]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap# 部署 kube-controller-manager [root@m1 k8s]# vim /opt/kubernetes/cfg/kube-controller-manager.conf KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \ --v=2 \ --log-dir=/opt/kubernetes/logs \ --leader-elect=true \ --master=127.0.0.1:8080 \ --bind-address=127.0.0.1 \ --allocate-node-cidrs=true \ --cluster-cidr=10.244.0.0/16 \ --service-cluster-ip-range=10.0.0.0/24 \ --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \ --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \ --root-ca-file=/opt/kubernetes/ssl/ca.pem \ --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \ --experimental-cluster-signing-duration=87600h0m0s" # -master:通過(guò)本地非安全本地端口 8080 連接 apiserver # -leader-elect:當(dāng)該組件啟動(dòng)多個(gè)時(shí),自動(dòng)選舉(HA) # -cluster-signing-cert-file/–cluster-signing-key-file:自動(dòng)為 kubelet 頒發(fā)證書(shū) 的 CA,與 apiserver 保持一致# systemd 管理 controller-manager [root@m1 k8s]# vim /usr/lib/systemd/system/kube-controller-manager.service [Unit] Description=Kubernetes Controller Manager Documentation=https://github.com/kubernetes/kubernetes [Service] EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS Restart=on-failure [Install] WantedBy=multi-user.target# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) [root@m1 k8s]# systemctl daemon-reload [root@m1 k8s]# systemctl start kube-controller-manager [root@m1 k8s]# systemctl enable kube-controller-manager# 部署 kube-scheduler [root@m1 k8s]# vim /opt/kubernetes/cfg/kube-scheduler.conf KUBE_SCHEDULER_OPTS="--logtostderr=false \ --v=2 \ --log-dir=/opt/kubernetes/logs \ --leader-elect \ --master=127.0.0.1:8080 \ --bind-address=127.0.0.1" # –master:通過(guò)本地非安全本地端口 8080 連接 apiserver。 # –leader-elect:當(dāng)該組件啟動(dòng)多個(gè)時(shí),自動(dòng)選舉(HA)# systemd 管理 scheduler [root@m1 k8s]# vim /usr/lib/systemd/system/kube-scheduler.service [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes[Service] EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler.conf ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS Restart=on-failure[Install] WantedBy=multi-user.target# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) [root@m1 k8s]# systemctl daemon-reload [root@m1 k8s]# systemctl start kube-scheduler [root@m1 k8s]# systemctl enable kube-scheduler# 查看集群狀態(tài) # 所有組件都已經(jīng)啟動(dòng)成功,通過(guò) kubectl 工具查看當(dāng)前集群組件狀態(tài): Warning: v1 ComponentStatus is deprecated in v1.19+ NAME STATUS MESSAGE ERROR controller-manager Healthy ok scheduler Healthy ok etcd-0 Healthy {"health":"true"} etcd-1 Healthy {"health":"true"} # 如上輸出說(shuō)明 Master 節(jié)點(diǎn)組件運(yùn)行正常。
- 部署node組件(由于未知錯(cuò)誤,筆者CentOS7無(wú)法識(shí)別 “”,此處有*.conf 和 *.server文件集合,驗(yàn)證碼:nht1)# 創(chuàng)建工作目錄并拷貝二進(jìn)制文件 # 在所有 worker node 創(chuàng)建工作目錄 [root@n1 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}# 將m1上證書(shū)文件拷貝到n1 [root@m1 k8s]# scp ~/TLS/k8s/ca*pem ~/TLS/k8s/kube-proxy*pem root@172.16.90.148:/opt/kubernetes/ssl/# 解壓文件并拷貝配置 [root@n1 ~]# tar -xzvf kubernetes-node-linux-amd64.tar.gz [root@n1 ~]# cd kubernetes/node/bin/ [root@n1 bin]# ls kubeadm kubectl kubelet kube-proxy [root@n1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin [root@n1 bin]# cp kubectl /usr/bin/# 部署 kubelet # 創(chuàng)建配置文件 [root@n1 bin]# vim /opt/kubernetes/cfg/kubelet.conf KUBELET_OPTS="--logtostderr=false \ --v=2 \ --log-dir=/opt/kubernetes/logs \ --hostname-override=m1 \ --network-plugin=cni \ --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \ --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \ --config=/opt/kubernetes/cfg/kubelet-config.yml \ --cert-dir=/opt/kubernetes/ssl \ --pod-infra-container-image=lizhenliang/pause-amd64:3.0" # –hostname-override:顯示名稱(chēng),集群中唯一 # –network-plugin:啟用 CNI # –kubeconfig:空路徑,會(huì)自動(dòng)生成,后面用于連接 apiserver # –bootstrap-kubeconfig:首次啟動(dòng)向 apiserver 申請(qǐng)證書(shū) # –config:配置參數(shù)文件 # –cert-dir:kubelet 證書(shū)生成目錄 # –pod-infra-container-image:管理 Pod 網(wǎng)絡(luò)容器的鏡像# 配置參數(shù)文件 [root@n1 bin]# vim /opt/kubernetes/cfg/kubelet-config.yml kind: KubeletConfiguration apiVersion: kubelet.config.k8s.io/v1beta1 address: 0.0.0.0 port: 10250 readOnlyPort: 10255 cgroupDriver: cgroupfs clusterDNS: - 10.0.0.2 clusterDomain: cluster.local failSwapOn: false authentication:anonymous:enabled: falsewebhook:cacheTTL: 2m0senabled: truex509:clientCAFile: /opt/kubernetes/ssl/ca.pem authorization:mode: Webhookwebhook:cacheAuthorizedTTL: 5m0scacheUnauthorizedTTL: 30s evictionHard: imagefs.available: 15% memory.available: 100Mi nodefs.available: 10% nodefs.inodesFree: 5% maxOpenFiles: 1000000 maxPods: 110# 生成 bootstrap.kubeconfig 文件 # 指定apiserver地址 [root@n1 kubernetes]# export KUBE_APISERVER="https://172.16.90.147:6443" # apiserver IP:PORT # 指定TOKEN值 [root@n1 kubernetes]# export TOKEN="c47ffb939f5ca36231d9e3121a252940" # 與 token.csv 里保持一致 # 設(shè)置集群參數(shù) [root@n1 kubernetes]# kubectl config set-cluster kubernetes \ > --certificate-authority=/opt/kubernetes/ssl/ca.pem \ > --embed-certs=true \ > --server=${KUBE_APISERVER} \ > --kubeconfig=bootstrap.kubeconfig Cluster "kubernetes" set. # 設(shè)置客戶(hù)端認(rèn)證參數(shù) [root@n1 kubernetes]# kubectl config set-credentials "kubelet-bootstrap" \ > --token=${TOKEN} \ > --kubeconfig=bootstrap.kubeconfig User "kubelet-bootstrap" set. # 設(shè)置上下文參數(shù) [root@n1 kubernetes]# kubectl config set-context default \ > --cluster=kubernetes \ > --user="kubelet-bootstrap" \ > --kubeconfig=bootstrap.kubeconfig Context "default" created. # 設(shè)置默認(rèn)上下文 [root@n1 kubernetes]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig Switched to context "default". 
# –embed-certs 為 true 時(shí)表示將 certificate-authority 證書(shū)寫(xiě)入到生成的 bootstrap.kubeconfig 文件中 # 拷貝到配置文件路徑 [root@n1 kubernetes]# cp bootstrap.kubeconfig /opt/kubernetes/cfg# systemd 管理 kubelet [root@n1 kubernetes]# vim /usr/lib/systemd/system/kubelet.service [Unit] Description=Kubernetes Kubelet After=docker.service [Service] EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS Restart=on-failure LimitNOFILE=65536 [Install] WantedBy=multi-user.target# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) [root@n1 kubernetes]# systemctl daemon-reload [root@n1 kubernetes]# systemctl start kubelet [root@n1 kubernetes]# systemctl enable kubelet# 批準(zhǔn) kubelet 證書(shū)申請(qǐng)并加入集群 # 查看 kubelet 證書(shū)請(qǐng)求 [root@m1 ~]# kubectl get csr NAME AGE SIGNERNAME REQUESTOR CONDITION node-csr-j8VQGozwmrCDsJfBXTtA7MYwlmSMULb8WacDPTSniDY 61s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending# 批準(zhǔn)申請(qǐng) [root@m1 ~]# kubectl certificate approve node-csr-j8VQGozwmrCDsJfBXTtA7MYwlmSMULb8WacDPTSniDY certificatesigningrequest.certificates.k8s.io/node-csr-j8VQGozwmrCDsJfBXTtA7MYwlmSMULb8WacDPTSniDY approved# 查看節(jié)點(diǎn) [root@m1 ~]# kubectl get node NAME STATUS ROLES AGE VERSION m1 NotReady <none> 28s v1.19.13 # 注:由于網(wǎng)絡(luò)插件還沒(méi)有部署,節(jié)點(diǎn)會(huì)沒(méi)有準(zhǔn)備就緒 NotReady# 部署 kube-proxy # 創(chuàng)建配置文件 [root@n1 kubernetes]# vim /opt/kubernetes/cfg/kube-proxy.conf KUBE_PROXY_OPTS="--logtostderr=false \ --v=2 \ --log-dir=/opt/kubernetes/logs \ --config=/opt/kubernetes/cfg/kube-proxy-config.yml"# 配置參數(shù)文件 [root@n1 kubernetes]# vim /opt/kubernetes/cfg/kube-proxy-config.yml kind: KubeProxyConfiguration apiVersion: kubeproxy.config.k8s.io/v1alpha1 bindAddress: 0.0.0.0 metricsBindAddress: 0.0.0.0:10249 clientConnection:kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig hostnameOverride: n1 clusterCIDR: 10.0.0.0/24# 生成 kubeconfig 文件 # m1已經(jīng)設(shè)置集群參數(shù)KUBE_APISERVER,這里不再設(shè)置 # 設(shè)置集群參數(shù) [root@n1 kubernetes]# kubectl config set-cluster kubernetes \ > --certificate-authority=/opt/kubernetes/ssl/ca.pem \ > --embed-certs=true \ > --server=${KUBE_APISERVER} \ > --kubeconfig=kube-proxy.kubeconfig Cluster "kubernetes" set. # 設(shè)置客戶(hù)端認(rèn)證參數(shù) [root@n1 kubernetes]# kubectl config set-credentials kube-proxy \ > --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \ > --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \ > --embed-certs=true \ > --kubeconfig=kube-proxy.kubeconfig User "kube-proxy" set. # 設(shè)置上下文參數(shù) [root@n1 kubernetes]# kubectl config set-context default \ > --cluster=kubernetes \ > --user=kube-proxy \ > --kubeconfig=kube-proxy.kubeconfig Context "default" created. # 設(shè)置默認(rèn)上下文 [root@n1 kubernetes]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig Switched to context "default". 
# 設(shè)置集群參數(shù)和客戶(hù)端認(rèn)證參數(shù)時(shí) –embed-certs 都為 true,這會(huì)將 certificate-authority、client-certificate 和 client-key 指向的證書(shū)文件內(nèi)容寫(xiě)入到生成的 kube-proxy.kubeconfig 文件中 # 新增節(jié)點(diǎn)時(shí)只需,將bootstrap.kubeconfig和kube-proxy.kubeconfig文件分發(fā)到各node節(jié)點(diǎn)上# 拷貝到配置文件指定路徑 [root@n1 kubernetes]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/# systemd 管理 kube-proxy [root@n1 kubernetes]# vim /usr/lib/systemd/system/kube-proxy.service [Unit] Description=Kubernetes Proxy After=network.target [Service] EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS Restart=on-failure LimitNOFILE=65536 [Install] WantedBy=multi-user.target# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) [root@n1 kubernetes]# systemctl daemon-reload [root@n1 kubernetes]# systemctl start kube-proxy [root@n1 kubernetes]# systemctl status kube-proxy ● kube-proxy.service - Kubernetes ProxyLoaded: loaded (/usr/lib/systemd/system/kube-proxy.service; disabled; vendor preset: disabled)Active: active (running) since 六 2021-07-31 19:03:13 CST; 2s ago ... [root@n1 kubernetes]# systemctl enable kube-proxy
- 部署集群(CNI)網(wǎng)絡(luò)# 解壓二進(jìn)制包并移動(dòng)到默認(rèn)工作目錄 [root@n1 ~]# mkdir -p /opt/cni/bin [root@n1 ~]# tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin# 部署CNI網(wǎng)絡(luò) [root@m1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml [root@m1 ~]# sed -i -r "s#quay.io/coreos/flannel:.*-amd64#lizhenliang/flannel:v0.12.0- amd64#g" kube-flannel.yml# 默認(rèn)鏡像地址無(wú)法訪(fǎng)問(wèn),修改為 docker hub 鏡像倉(cāng)庫(kù) [root@m1 ~]# kubectl apply -f kube-flannel.yml [root@m1 ~]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE kube-flannel-ds-2kqmz 1/1 Running 0 6m26s [root@m1 ~]# kubectl get node NAME STATUS ROLES AGE VERSION m1 Ready <none> 81m v1.19.13 # 部署好網(wǎng)絡(luò)插件,Node準(zhǔn)備就緒# 授權(quán) apiserver 訪(fǎng)問(wèn) kubelet [root@m1 ~]# vim apiserver-to-kubelet-rbac.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata:annotations:rbac.authorization.kubernetes.io/autoupdate: "true"labels:kubernetes.io/bootstrapping: rbac-defaultsname: system:kube-apiserver-to-kubelet rules:- apiGroups:- ""resources:- nodes/proxy- nodes/stats- nodes/log- nodes/spec- nodes/metrics- pods/logverbs: - "*" --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata:name: system:kube-apiservernamespace: "" roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: system:kube-apiserver-to-kubelet subjects:- apiGroup: rbac.authorization.k8s.iokind: Username: kubernetes [root@m1 ~]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
- 新增加 Worker Node# 拷貝已部署好的 Node 相關(guān)文件到新節(jié)點(diǎn) # 在n1(172.16.90.148)節(jié)點(diǎn)將Worker Node涉及文件拷貝到新節(jié)點(diǎn)172.16.90.149 [root@n1 ~]# scp -r /opt/kubernetes root@172.16.90.149:/opt/ [root@n1 ~]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@172.16.90.149:/usr/lib/systemd/system [root@n1 ~]# scp -r /opt/cni/ root@172.16.90.149:/opt/ [root@n1 ~]# scp /opt/kubernetes/ssl/ca.pem root@172.16.90.149:/opt/kubernetes/ssl# 刪除kubelet證書(shū)和kubeconfig文件 [root@n2 ~]# rm /opt/kubernetes/cfg/kubelet.kubeconfig [root@n2 ~]# rm -f /opt/kubernetes/ssl/kubelet* # 注:這幾個(gè)文件是證書(shū)申請(qǐng)審批后自動(dòng)生成的,每個(gè)Node不同,必須刪除重新生成# 修改主機(jī)名 [root@n2 ~]# vim /opt/kubernetes/cfg/kubelet.conf ... --hostname-override=n2 ... [root@n2 ~]# vim /opt/kubernetes/cfg/kube-proxy-config.yml ... hostnameOverride: n2 ...# 啟動(dòng)并設(shè)置開(kāi)機(jī)啟動(dòng) [root@n2 ~]# systemctl daemon-reload [root@n2 ~]# systemctl start kubelet [root@n2 ~]# systemctl enable kubelet [root@n2 ~]# systemctl start kube-proxy [root@n2 ~]# systemctl enable kube-proxy# 在Master上批準(zhǔn)新Node kubelet證書(shū)申請(qǐng) # 與n1相同,此處略過(guò)# 查看 Node 狀態(tài) [root@m1 ~]# Kubectl get node
- Test the cluster
[root@m1 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@m1 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@m1 ~]# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-6799fc88d8-j5fn2   0/1     ContainerCreating   0          20s
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        3d23h
service/nginx        NodePort    10.0.0.70    <none>        80:30465/TCP   7s
# Open http://NodeIP:Port in a browser (for example http://172.16.90.148:30465)
Comparing the two setup methods: a summary of both approaches
- Building a k8s cluster with kubeadm
- Install the Linux virtual machines and initialize the systems
- On all nodes (master and workers), install docker (download it and switch the registry mirror and yum source to Aliyun), kubeadm, kubelet, and kubectl
- Run the initialization on the master node (kubeadm init, pointing the image repository at the Aliyun mirror)
- Deploy the CNI network plugin (kubectl apply -f kube-flannel.yml)
- On every worker node, run kubeadm join <master IP and port> to add the node to the cluster
- Building a k8s cluster from binaries
- Install the Linux virtual machines and initialize the systems
- Generate self-signed cfssl certificates (ca-key.pem, ca.pem, server-key.pem, server.pem)
- Deploy the etcd cluster (essentially: hand the etcd service to systemd, copy the certificates over, start it, and enable it at boot)
- Self-sign a certificate for the apiserver (the procedure is similar to etcd's)
- Deploy the master components (download the apiserver, controller-manager, and scheduler binaries, hand them to systemd, start them, and enable them at boot)
- Deploy the node components (download the docker, kubelet, and kube-proxy binaries, hand them to systemd, start them and enable them at boot, then approve the kubelet certificate request to join the node to the cluster)
- Deploy the CNI network plugin
3. Core Techniques (Part 1)
The Kubernetes command-line tool kubectl: kubectl is the command-line tool for a Kubernetes cluster; with it you can manage the cluster itself and install and deploy containerized applications on it. Command syntax: kubectl [command] [TYPE] [NAME] [flags]. command is the operation to perform on the resource, e.g. create, get, describe, delete. TYPE is the resource type; it is case-sensitive and may be written in singular, plural, or abbreviated form, e.g. kubectl get pod pod1. NAME is the resource name, also case-sensitive; if the name is omitted, all resources of that type are listed, e.g. kubectl get pods. flags are optional parameters, e.g. -s or --server to specify the address and port of the Kubernetes API server. To get help, run kubectl --help, or kubectl get --help for a specific operation. kubectl subcommands fall into these groups:
- Basic commands
- Deployment and cluster management commands
- Troubleshooting and debugging commands
- Other commands
Kubernetes YAML files explained: in a k8s cluster, both resource management and the orchestration and deployment of resource objects can be handled with declarative (YAML) files, known as resource manifest files; feeding a manifest file to kubectl is enough to orchestrate and deploy large numbers of resource objects. Syntax rules: indentation expresses hierarchy; tabs must not be used for indentation, only spaces; two spaces per level is the usual convention; characters such as colons and commas are followed by a space; --- marks the start of a new YAML document; # marks a comment. A manifest consists of a controller definition and the object it controls.
Writing YAML quickly: use kubectl create to generate a file, e.g. kubectl create deployment web --image=nginx -o yaml --dry-run > web.yaml (for something not yet deployed); or use kubectl get to export one, e.g. kubectl get deploy nginx -o=yaml --export > web.yaml (for something already deployed).
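For reference, the dry-run command above produces a manifest roughly like the following (a sketch; exact defaults and field order vary with the kubectl version):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx        # the image passed on the command line
        name: nginx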
Pod overview: the Pod is the smallest unit that can be created and managed in k8s, the smallest resource object a user creates or deploys in the resource-object model, and the resource object on which containerized applications run; every other resource object exists to support or extend Pods. k8s does not handle containers directly; it handles Pods, and a Pod is composed of one or more containers.
The Pod is Kubernetes' most important concept. Every Pod has a special "root container" called the Pause container, whose image is part of the Kubernetes platform itself; besides the Pause container, each Pod holds one or more closely related user business containers (which join the Pause container's namespaces).
Basic Pod facts: smallest deployment unit; can contain multiple containers (a group of containers); the containers in a Pod share a network namespace; Pods are ephemeral.
Why Pods exist: with plain Docker, one container runs one process and one application (which keeps containers easy to manage); a Pod is a multi-process design that runs several applications together (one Pod holds several containers, each running one application); Pods exist for tightly coupled applications, e.g. two applications that interact, call each other over the network, or call each other frequently.
How a Pod relates to applications, containers, and nodes: every Pod is one instance of an application and has its own IP; a Pod can hold multiple containers that share network and storage resources; each Pod has a pause container that holds the state of all its containers, so managing the pause container effectively manages every container in the Pod; containers in the same Pod are always scheduled onto the same node, and Pod-to-Pod communication across nodes is implemented on a virtual layer-2 network; Pods come in two kinds, ordinary Pods and static Pods.
Pod characteristics: shared network — containers are isolated from each other with Linux namespaces and cgroups, so sharing a network requires all of a Pod's containers to live in the same namespace; this is implemented by first creating the pause container (also called the info container) and then joining the business containers to it, so everything shares one namespace and the Pod exposes the info container's IP, MAC, and ports. Shared storage — Docker volumes are used to persist data to a location that all nodes can reach. Short lifetime — when the node a Pod runs on fails, the Pod is rescheduled onto another node, and the rescheduled Pod is a brand-new one. Flat network — every Pod in the k8s cluster lives in one shared network address space, so any Pod can reach any other Pod by its IP address.
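A minimal two-container Pod sketch illustrating the shared network namespace and a shared volume (the images and commands are only examples):

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                      # shared, Pod-scoped scratch volume
  containers:
  - name: web
    image: nginx                      # reachable as localhost:80 from the sidecar
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "echo hello from the sidecar > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data            # same volume, different mount point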
Common Pod configuration: image pull policy, resource limits, restart policy, and health checks (a combined manifest sketch follows the list)
- Image pull policy
- Resource limits
- Restart policy
- Health checks
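A single Pod manifest touching all four settings might look like this (a sketch with example values):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  restartPolicy: Always              # restart policy: Always / OnFailure / Never
  containers:
  - name: app
    image: nginx:1.16
    imagePullPolicy: IfNotPresent    # pull policy: IfNotPresent / Always / Never
    resources:
      requests:                      # the scheduler uses requests when picking a node
        memory: "64Mi"
        cpu: "250m"
      limits:                        # the container is constrained to these limits
        memory: "128Mi"
        cpu: "500m"
    livenessProbe:                   # health check: restart the container when it fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:                  # health check: drop from Service endpoints when it fails
      tcpSocket:
        port: 80
      periodSeconds: 5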
Pod scheduling: this covers two parts, the Pod creation flow and the factors that influence where a Pod is scheduled
- Pod creation flow: the Pod is first created on the master through the apiserver, and its information is persisted to etcd; the scheduler continuously watches the apiserver, and when it detects a Pod to be created it reads that Pod's information from etcd via the apiserver and uses its scheduling algorithm to bind the Pod to a node; the kubelet on the chosen node then talks to the apiserver, reads the information stored in etcd, and instructs docker to create the container.
- Factors that influence Pod scheduling: mainly resource limits (the requests and limits discussed above), node selectors, node affinity, and taints and tolerations (a nodeSelector/toleration sketch follows).
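A sketch showing a node selector and a toleration on a Pod (the label key env_role and the taint key/value are example values, assuming matching node labels and taints exist):

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  nodeSelector:
    env_role: prod                 # only nodes labelled env_role=prod are candidates
  tolerations:
  - key: "dedicated"               # tolerate nodes tainted dedicated=special:NoSchedule
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx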
Controller overview: a controller is an object that manages and runs containers on the cluster; operational concerns of Pods (scaling, rolling upgrades, and so on) are handled through controllers, and Pods and controllers are associated with each other through labels.
The Deployment controller: used to deploy stateless applications (for example web applications such as the nginx deployed earlier), manage Pods and ReplicaSets, and perform deployments and rolling upgrades; its typical scenarios are web services and microservices.
The StatefulSet controller: Pods are either stateless (all Pods are interchangeable; there is no ordering requirement; it does not matter which node they run on; they can be scaled in and out freely) or stateful (all of the above must be taken into account; each Pod must keep its startup order and identity; it needs a unique network identifier and persistent storage; ordering matters, as in a MySQL primary/replica setup). StatefulSet is the controller for stateful Pods; creating one involves a headless Service plus the stateful application (a minimal sketch follows).
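A minimal headless Service plus StatefulSet sketch (names and image are example values):

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None                # headless: no cluster IP; DNS resolves directly to the Pods
  selector:
    app: nginx-sts
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sts
spec:
  serviceName: nginx-headless    # ties the Pods to the headless Service for stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: nginx-sts
  template:
    metadata:
      labels:
        app: nginx-sts
    spec:
      containers:
      - name: nginx
        image: nginx

Each replica gets a stable identity such as nginx-sts-0.nginx-headless, which is what makes ordered, stateful deployment possible.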
The DaemonSet controller: used to deploy daemons; it runs one copy of a Pod on every node, and newly added nodes automatically get the Pod as well (for example, installing a log or data collection agent on every node). A sketch follows.
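A DaemonSet sketch that runs one log-reading Pod per node (the busybox command and the hostPath are illustrative assumptions; a real collection-agent image would go here):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: busybox
        command: ["sh", "-c", "tail -f /host-log/messages"]
        volumeMounts:
        - name: varlog
          mountPath: /host-log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log           # mounts the node's log directory into the Pod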
The Job controller: one-off tasks
The CronJob controller: scheduled tasks (minimal sketches of both follow)
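Minimal Job and CronJob sketches (images, command, and schedule are example values):

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4                    # give up after 4 failed retries
  template:
    spec:
      restartPolicy: Never           # a Job Pod must not restart forever
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
---
apiVersion: batch/v1beta1            # batch/v1 on newer clusters
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"            # every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from k8s"]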
Service overview: a Service defines access rules for a group of Pods. Purpose: keep Pods reachable (service discovery) and define an access policy (load balancing) for a group of Pods.
Common Service types: ClusterIP, NodePort, LoadBalancer (a NodePort manifest sketch follows the list)
- ClusterIP: for access from inside the cluster only (for example, node1 reaching a started Pod; use kubectl get svc to see the assigned IP)
- NodePort: for access from outside the cluster (for example, reaching the nginx inside a Pod from a browser via a node IP)
- LoadBalancer: also for external access, typically through a public cloud load balancer
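The manifest equivalent of the kubectl expose ... --type=NodePort commands used earlier looks roughly like this (a sketch; the nodePort value is an example and can be omitted to let k8s pick one):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web                 # traffic is forwarded to Pods carrying this label
  ports:
  - port: 80                 # cluster-internal Service port
    targetPort: 80           # container port
    nodePort: 30080          # opened on every node; must fall in 30000-32767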
4. Core Techniques (Part 2)
Configuration management: split by whether the data is sensitive, into Secret (commonly used for credentials; it stores its data, base64-encoded, in etcd and lets Pod containers access it, for example by mounting it as a volume) and ConfigMap (commonly used for configuration files; it stores unencrypted data in etcd and lets Pods consume it as variables or mounted volumes). A combined sketch follows the list.
- Using a Secret
- Using a ConfigMap
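A combined sketch: a Secret injected as an environment variable and a ConfigMap mounted as a volume (all names and values are examples; Secret data must be base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: YWRtaW4=             # base64 of "admin"
  password: MWYyZDFlMmU2N2Rm     # base64 of "1f2d1e2e67df"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
data:
  redis.properties: |
    redis.host=127.0.0.1
    redis.port=6379
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  volumes:
  - name: config
    configMap:
      name: redis-config
  containers:
  - name: app
    image: nginx
    env:
    - name: SECRET_USERNAME          # value injected from the Secret
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    volumeMounts:
    - name: config
      mountPath: /etc/config         # redis.properties appears under this path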
Cluster security: operations against a k8s cluster go through the three steps below, and every request passes through the apiserver, which acts as the central coordinator (think of it as the gatekeeper). Access requires a certificate, a token, or a username and password; accessing a Pod additionally requires a ServiceAccount.
- Authentication: transport security (port 8080 is not exposed externally and is only reachable internally; external access uses port 6443); common client authentication methods are HTTPS certificate authentication based on a CA, HTTP token authentication that identifies the user by a token, and HTTP basic authentication with a username and password
- Authorization: performed with RBAC, i.e. role-based access control
- Admission control: a list of admission controllers; a request is admitted only if it passes the controllers on the list, otherwise it is rejected
RBAC overview: role-based access control (a Role/RoleBinding sketch follows the list)
- Roles: Role (access within a specific namespace) and ClusterRole (access across all namespaces)
- Role bindings: RoleBinding (binds a role to subjects) and ClusterRoleBinding (binds a cluster role to subjects)
- Subjects: user, group, and serviceAccount
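A Role/RoleBinding sketch granting a user read-only access to Pods in one namespace (the namespace roledemo and the user mary are example values):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: roledemo
  name: pod-reader
rules:
- apiGroups: [""]                  # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: roledemo
subjects:
- kind: User
  name: mary                       # the user identified during authentication
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io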
Authorization with RBAC: the process is demonstrated below with a worked example (the TLS files used here come from the binary-install environment set up earlier)
Why Ingress: when a Service of type NodePort is used (exposing a port and accessing the application as ip:port), it has drawbacks: the port is opened on every node and the application is reached through any node's ip:port, and each port can only be used once, so one port maps to one application. In practice, applications are accessed by domain name, with requests routed to different services based on the host, and Ingress is the solution to exactly that problem: Pods are associated with the Ingress through Services, and the Ingress acts as the unified entry point in front of the group of Pods that a Service selects.
Using an Ingress controller: here the community-maintained NGINX ingress controller is deployed; the steps are roughly to deploy the ingress controller and then create Ingress rules. The following walks through exposing an application with Ingress.
- Create the nginx application and expose it externally with a NodePort Service
- Deploy the ingress controller
apiVersion: v1 kind: Namespace metadata:name: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx---kind: ConfigMap apiVersion: v1 metadata:name: nginx-configurationnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx--- kind: ConfigMap apiVersion: v1 metadata:name: tcp-servicesnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx--- kind: ConfigMap apiVersion: v1 metadata:name: udp-servicesnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx--- apiVersion: v1 kind: ServiceAccount metadata:name: nginx-ingress-serviceaccountnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx--- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata:name: nginx-ingress-clusterrolelabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx rules:- apiGroups:- ""resources:- configmaps- endpoints- nodes- pods- secretsverbs:- list- watch- apiGroups:- ""resources:- nodesverbs:- get- apiGroups:- ""resources:- servicesverbs:- get- list- watch- apiGroups:- ""resources:- eventsverbs:- create- patch- apiGroups:- "extensions"- "networking.k8s.io"resources:- ingressesverbs:- get- list- watch- apiGroups:- "extensions"- "networking.k8s.io"resources:- ingresses/statusverbs:- update--- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata:name: nginx-ingress-rolenamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx rules:- apiGroups:- ""resources:- configmaps- pods- secrets- namespacesverbs:- get- apiGroups:- ""resources:- configmapsresourceNames:# Defaults to "<election-id>-<ingress-class>"# Here: "<ingress-controller-leader>-<nginx>"# This has to be adapted if you change either parameter# when launching the nginx-ingress-controller.- "ingress-controller-leader-nginx"verbs:- get- update- apiGroups:- ""resources:- configmapsverbs:- create- apiGroups:- ""resources:- endpointsverbs:- get--- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata:name: nginx-ingress-role-nisa-bindingnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: nginx-ingress-role subjects:- kind: ServiceAccountname: nginx-ingress-serviceaccountnamespace: ingress-nginx--- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata:name: nginx-ingress-clusterrole-nisa-bindinglabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: nginx-ingress-clusterrole subjects:- kind: ServiceAccountname: nginx-ingress-serviceaccountnamespace: ingress-nginx---apiVersion: apps/v1 kind: Deployment metadata:name: nginx-ingress-controllernamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx spec:replicas: 1selector:matchLabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxtemplate:metadata:labels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxannotations:prometheus.io/port: "10254"prometheus.io/scrape: "true"spec:hostNetwork: true# wait up to five minutes for the drain of connectionsterminationGracePeriodSeconds: 300serviceAccountName: 
nginx-ingress-serviceaccountnodeSelector:kubernetes.io/os: linuxcontainers:- name: nginx-ingress-controllerimage: lizhenliang/nginx-ingress-controller:0.30.0args:- /nginx-ingress-controller- --configmap=$(POD_NAMESPACE)/nginx-configuration- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services- --udp-services-configmap=$(POD_NAMESPACE)/udp-services- --publish-service=$(POD_NAMESPACE)/ingress-nginx- --annotations-prefix=nginx.ingress.kubernetes.iosecurityContext:allowPrivilegeEscalation: truecapabilities:drop:- ALLadd:- NET_BIND_SERVICE# www-data -> 101runAsUser: 101env:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceports:- name: httpcontainerPort: 80protocol: TCP- name: httpscontainerPort: 443protocol: TCPlivenessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 10readinessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPperiodSeconds: 10successThreshold: 1timeoutSeconds: 10lifecycle:preStop:exec:command:- /wait-shutdown---apiVersion: v1 kind: LimitRange metadata:name: ingress-nginxnamespace: ingress-nginxlabels:app.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginx spec:limits:- min:memory: 90Micpu: 100mtype: Container
- Create the Ingress rule
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.ingredemo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80
- Add a hosts entry for the domain on the Windows machine used for testing
- Verify
Why Helm: up to now the basic deployment workflow has been to write YAML, then a Deployment, a Service, and finally an Ingress. That suits a single application or one with only a few services, but a microservice project can easily have dozens of services, each with its own set of YAML files; maintaining that many YAML files and managing their versions becomes very inconvenient. Helm is the solution to this problem. Helm is a package manager for Kubernetes, comparable to yum or apt on Linux, and makes it easy to deploy previously packaged YAML to Kubernetes. With Helm, all the YAML is managed as one unit, so it can be reused efficiently. Helm has three key concepts: helm (the command-line client tool), chart (the packaged collection of YAML), and release (an entity deployed from a chart, i.e. application-level version management). Helm v3, released in 2019, changed noticeably from earlier versions: Tiller was removed, a release can be reused in different namespaces, and charts can be pushed to Docker registries.
Installing and using Helm: this covers installing helm, configuring helm repositories, and quickly deploying an application with helm
# 安裝helm [root@k8s-master ~]# tar -xzvf helm-v3.0.0-linux-amd64.tar.gz [root@k8s-master ~]# cd linux-amd64/ [root@k8s-master linux-amd64]# ls helm LICENSE README.md [root@k8s-master linux-amd64]# mv helm /usr/bin# 配置helm倉(cāng)庫(kù) # 添加存儲(chǔ)庫(kù) [root@k8s-master linux-amd64]# helm repo add stable http://mirror.azure.cn/kubernetes/charts #微軟倉(cāng)庫(kù),這個(gè)倉(cāng)庫(kù)推薦,基本 上官網(wǎng)有的 chart 這里都有 [root@k8s-master linux-amd64]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts #阿里云倉(cāng)庫(kù) # 更新倉(cāng)庫(kù)源 [root@k8s-master linux-amd64]# helm repo update # 查看配置的存儲(chǔ)庫(kù) [root@k8s-master linux-amd64]# helm repo list # 刪除倉(cāng)庫(kù) [root@k8s-master linux-amd64]# helm repo remove stable# 使用helm快速部署應(yīng)用 # 使用命令搜索應(yīng)用,格式:helm search repo 名稱(chēng) [root@k8s-master ~]# helm search repo weave NAME CHART VERSION APP VERSION DESCRIPTION aliyun/weave-cloud 0.1.2 Weave Cloud is a add-on to Kubernetes which pro... aliyun/weave-scope 0.9.2 1.6.5 A Helm chart for the Weave Scope cluster visual... # 根據(jù)搜索內(nèi)容選擇安裝,格式:helm install 安裝之后名稱(chēng) 搜索之后應(yīng)用名稱(chēng) [root@k8s-master ~]# helm install ui aliyun/weave-scope # 查看安裝之后狀態(tài) [root@k8s-master ~]# helm list [root@k8s-master ~]# helm status ui# 查應(yīng)用狀態(tài)發(fā)現(xiàn),ui-weave-scope其TYPE為ClusterIP [root@k8s-master ~]# kubectl get svc # 修改其TYPE [root@k8s-master ~]# kubectl edit svc ui-weave-scope ...type: NodePort ... # 查看修改后狀態(tài)發(fā)現(xiàn),ui-weave-scope其TYPE為NodePort [root@k8s-master ~]# kubectl get svc自定義chart部署:過(guò)程見(jiàn)以下命令
# 使用命令創(chuàng)建chart,格式為:helm create chart 名稱(chēng) [root@k8s-master ~]# helm create mychart Creating mychart [root@k8s-master ~]# cd mychart/ #Chart.yaml:當(dāng)前chart屬性配置信息 #templates:編寫(xiě)yaml文件放在這個(gè)目錄 #values.yaml:yaml文件可以使用全局變量 [root@k8s-master mychart]# ls charts Chart.yaml templates values.yaml# 在templates文件夾中創(chuàng)建兩個(gè)yaml文件deployment.yaml和service.yaml [root@k8s-master mychart]# cd templates/ [root@k8s-master templates]# ls deployment.yaml _helpers.tpl ingress.yaml NOTES.txt serviceaccount.yaml service.yaml tests [root@k8s-master templates]# rm -rf ./* [root@k8s-master templates]# kubectl create deployment web1 --image=nginx --dry-run -o yaml > deployment.yaml W0817 20:20:53.148698 15888 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client. [root@k8s-master templates]# ls deployment.yaml [root@k8s-master templates]# kubectl expose deployment web1 --port=80 --target-port=80 --type=NodePort --dry-run -o yaml > service.yaml W0817 20:24:22.445043 21372 helpers.go:557] --dry-run is deprecated and can be replaced with --dry-run=client. Error from server (NotFound): deployments.apps "web1" not found [root@k8s-master templates]# ls deployment.yaml service.yaml # 此時(shí)打開(kāi)service.yaml發(fā)現(xiàn)為空 [root@k8s-master templates]# kubectl create deployment web1 --image=nginx deployment.apps/web1 created [root@k8s-master templates]# kubectl expose deployment web1 --port=80 --target-port=80 --type=NodePort --dry-run -o yaml > service.yaml # 此時(shí)service.yaml里面就有內(nèi)容 # 刪掉web1,以helm方式部署 [root@k8s-master templates]# kubectl delete deployment web1 deployment.apps "web1" deleted# 安裝mychart [root@k8s-master ~]# helm install web1 mychart/ # 檢查 [root@k8s-master ~]# kubectl get pods [root@k8s-master ~]# kubectl get svc# 應(yīng)用升級(jí) [root@k8s-master ~]# helm upgrade web1 mychart/chart模板使用:chart模板可實(shí)現(xiàn)yaml高效復(fù)用,其方式是通過(guò)chart的values.yaml文件定義全局變量傳遞參數(shù)(在values.yaml定義變量和值,在具體yaml文件中獲取定義變量值),動(dòng)態(tài)渲染模板,yaml內(nèi)容動(dòng)態(tài)傳入?yún)?shù)生成(在yaml文件中大體只有image、tag、label、port和replicas這幾個(gè)地方不同而已),其實(shí)現(xiàn)過(guò)程如下
# 在values.yaml定義變量和值 [root@k8s-master ~]# cd mychart/ [root@k8s-master mychart]# ls charts Chart.yaml templates values.yaml [root@k8s-master mychart]# vim values.yaml ... replicas: 1 image: nginx tag: 1.16 label: nginx port: 80 ...# 在templates的yaml文件使用values.yaml定義變量 # 通過(guò)表達(dá)式使用全局變量,其格式為:{{.Values.變量名稱(chēng)}} # 也常用{{.Release.Name}}避免生成隨機(jī)名稱(chēng) [root@k8s-master mychart]# cd templates/ [root@k8s-master templates]# vim deployment.yaml apiVersion: apps/v1 kind: Deployment metadata:name: {{ .Release.Name}}-deploy spec:replicas: 1selector:matchLabels:app: {{ .Values.label}}strategy: {}template:metadata:creationTimestamp: nulllabels:app: {{ .Values.label}}spec:containers:- image: {{ .Values.image}}name: nginxresources: {} status: {} [root@k8s-master templates]# vim deployment.yaml apiVersion: v1 kind: Service metadata:name: {{ .Release.Name}}-svc spec:ports:- port: {{ .Values.port}}protocol: TCPtargetPort: 80selector:app: {{ .Values.label}}type: NodePort status:loadBalancer: {} [root@k8s-master ~]# helm install --dry-run web2 mychart/ #檢查 [root@k8s-master ~]# kubectl get pods [root@k8s-master ~]# kubectl get svc持久化存儲(chǔ):數(shù)據(jù)卷emptydir是本地存儲(chǔ),pod重啟后數(shù)據(jù)會(huì)丟失,為了解決這一問(wèn)題就需要對(duì)數(shù)據(jù)進(jìn)行持久化存儲(chǔ)。實(shí)現(xiàn)這一方案,k8s使用nfs網(wǎng)絡(luò)存儲(chǔ)實(shí)現(xiàn)pod重啟數(shù)據(jù)還在,以下是實(shí)現(xiàn)過(guò)程
# 1. 準(zhǔn)備環(huán)境:創(chuàng)建一臺(tái)服務(wù)器安裝nfs服務(wù)端(筆者這里ip設(shè)置為172.16.90.134)并關(guān)閉防火墻 [root@90143-k8s-nfs ~]# systemctl disable --now firewalld Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service. Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service. [root@90143-k8s-nfs ~]# firewall-cmd --list-all FirewallD is not running # 2. 安裝nfs [root@90143-k8s-nfs ~]# yum install -y nfs-utils # 3. 創(chuàng)建掛載路徑 [root@90143-k8s-nfs ~]# mkdir -p /data/nfs # 4. 設(shè)置掛載路徑 [root@90143-k8s-nfs ~]# vim /etc/exports /data/nfs *(rw,no_root_squash) # 5. 在k8s的node節(jié)點(diǎn)安裝nfs [root@k8s-node1 ~]# yum install -y nfs-utils [root@k8s-node2 ~]# yum install -y nfs-utils # 6. 在nfs服務(wù)器中啟動(dòng)服務(wù) [root@90143-k8s-nfs ~]# systemctl start nfs [root@90143-k8s-nfs ~]# ps -ef | grep nfs avahi 5936 1 0 14:52 ? 00:00:00 avahi-daemon: running [90143-k8s-nfs.local] root 27593 2 0 15:10 ? 00:00:00 [nfsd4_callbacks] root 27599 2 0 15:10 ? 00:00:00 [nfsd] root 27600 2 0 15:10 ? 00:00:00 [nfsd] root 27601 2 0 15:10 ? 00:00:00 [nfsd] root 27602 2 0 15:10 ? 00:00:00 [nfsd] root 27603 2 0 15:10 ? 00:00:00 [nfsd] root 27604 2 0 15:10 ? 00:00:00 [nfsd] root 27605 2 0 15:10 ? 00:00:00 [nfsd] root 27606 2 0 15:10 ? 00:00:00 [nfsd] root 28030 7550 0 15:10 pts/0 00:00:00 grep --color=auto nfs # 7. 在k8s集群部署應(yīng)該用使用nfs持久網(wǎng)絡(luò)存儲(chǔ) [root@k8s-master ~]# mkdir pv [root@k8s-master ~]# vim pv/nfs-nginx.yaml apiVersion: apps/v1 kind: Deployment metadata:name: nginx-dep1 spec:replicas: 1selector:matchLabels:app: nginxtemplate:metadata:labels:app: nginxspec:containers:- name: nginximage: nginxvolumeMounts:- name: wwwrootmountPath: /usr/share/nginx/htmlports:- containerPort: 80volumes:- name: wwwrootnfs:server: 172.16.90.143path: /data/nfs [root@k8s-master ~]# cd pv [root@k8s-master pv]# kubectl apply -f nfs-nginx.yaml deployment.apps/nginx-dep1 created [root@k8s-master pv]# kubectl describe pod nginx-dep1-776574d4d-hg647 [root@k8s-master pv]# kubectl exec -it nginx-dep1-776574d4d-mdtcd bash kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. root@nginx-dep1-776574d4d-mdtcd:/# ls /usr/share/nginx/html [root@90143-k8s-nfs ~]# cd /data/nfs/ hello nfs root@nginx-dep1-776574d4d-mdtcd:/# ls /usr/share/nginx/html index.html root@nginx-dep1-776574d4d-mdtcd:/# exit exit # 8. 驗(yàn)證,通過(guò)NodeIP:Port可瀏覽器查看 [root@k8s-master pv]# kubectl expose deployment nginx-dep1 --port=80 --target-port=80 --type=NodePort service/nginx-dep1 exposed [root@k8s-master pv]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22d nginx NodePort 10.105.227.129 <none> 80:30640/TCP 22d nginx-dep1 NodePort 10.96.151.113 <none> 80:31987/TCP 12s web1 NodePort 10.101.103.2 <none> 80:31093/TCP 2d5hPV與PVC:PV(持久化存儲(chǔ),對(duì)存儲(chǔ)資源進(jìn)行抽象,對(duì)外提供可以調(diào)用的地方,本質(zhì)是生產(chǎn)者)與PVC(用于調(diào)用,不需要關(guān)心內(nèi)部實(shí)現(xiàn)細(xì)節(jié),本質(zhì)是消費(fèi)者)其實(shí)現(xiàn)流程如下圖所示:
PV and PVC: a PV (PersistentVolume) abstracts a piece of storage and offers it to the cluster to consume; it is essentially the producer. A PVC (PersistentVolumeClaim) is how an application requests that storage without caring about the backend details; it is essentially the consumer. The flow is: the application deployment references a PVC; the PVC binds to a PV; the PV carries the actual backend details (here the NFS server IP and export path). PV and PVC are matched by storage capacity and access mode.
# To avoid interference, delete nfs-nginx.yaml first
[root@k8s-master pv]# kubectl delete -f nfs-nginx.yaml
deployment.apps "nginx-dep1" deleted
[root@k8s-master pv]# kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx-6799fc88d8-lj24f   1/1     Running            0          22d
web-7866dfdb9f-7zg68     0/1     ImagePullBackOff   0          2d5h
web-96d5df5c8-br8md      1/1     Running            0          2d5h
# Define the PVC, together with a deployment that mounts it
[root@k8s-master pv]# vim pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dep1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@k8s-master pv]# kubectl apply -f pvc.yaml
deployment.apps/nginx-dep1 created
persistentvolumeclaim/my-pvc created
# Define the PV, pointing at the NFS server and export path
[root@k8s-master pv]# vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/nfs
    server: 172.16.90.143
[root@k8s-master pv]# kubectl apply -f pv.yaml
persistentvolume/my-pv created
# Check: the PVC is bound to the PV and the pods are running
[root@k8s-master pv]# kubectl get pv,pvc
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/my-pv   5Gi        RWX            Retain           Bound    default/my-pvc                           43s

NAME                           STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/my-pvc   Bound    my-pv    5Gi        RWX                           3m13s
[root@k8s-master pv]# kubectl get pods
NAME                         READY   STATUS             RESTARTS   AGE
nginx-6799fc88d8-lj24f       1/1     Running            0          22d
nginx-dep1-69f5bb95b-bwn2g   1/1     Running            0          4m11s
nginx-dep1-69f5bb95b-gpn9f   1/1     Running            0          4m11s
nginx-dep1-69f5bb95b-k9xhw   1/1     Running            0          4m11s
web-7866dfdb9f-7zg68         0/1     ImagePullBackOff   0          2d5h
web-96d5df5c8-br8md          1/1     Running            0          2d5h
[root@k8s-master pv]# kubectl exec -it nginx-dep1-69f5bb95b-bwn2g bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-dep1-69f5bb95b-bwn2g:/# ls /usr/share/nginx/html/
index.html
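The whole point of backing the volume with NFS is that the data outlives any single pod. A quick check, as a sketch using the pod names from the listing above (the recreated pod's name is a placeholder):

# every replica mounts the same NFS export, so they all see the same index.html
[root@k8s-master pv]# kubectl exec nginx-dep1-69f5bb95b-gpn9f -- ls /usr/share/nginx/html/
index.html
# delete one pod; the Deployment recreates it and the new pod still sees the file
[root@k8s-master pv]# kubectl delete pod nginx-dep1-69f5bb95b-bwn2g
[root@k8s-master pv]# kubectl get pods
[root@k8s-master pv]# kubectl exec <new-pod-name> -- ls /usr/share/nginx/html/
index.html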
5. Log Management
6. Monitoring Platform
Cluster resource monitoring covers two things: what to monitor (the metrics) and the platform used to monitor it.
- Monitoring metrics
- Cluster-level monitoring: node resource utilization, node count, number of running pods
- Pod-level monitoring: container metrics, application metrics
- Monitoring platform (Prometheus + Grafana)
- Prometheus: open source; provides monitoring, alerting, and a time-series database; periodically scrapes the state of monitored components over HTTP; no complex integration is needed, components only have to expose an HTTP endpoint (a minimal scrape-config sketch follows this list)
- Grafana: an open-source data analysis and visualization tool; supports many data sources
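To make the "periodic HTTP scraping" concrete, here is a minimal sketch of a Prometheus scrape configuration. The job name, interval, and node IP placeholders are illustrative; in the cluster deployment below the equivalent configuration is delivered through configmap.yaml rather than a standalone file.

# prometheus.yml — minimal scrape configuration sketch
global:
  scrape_interval: 15s              # pull metrics every 15 seconds over HTTP
scrape_configs:
- job_name: node-exporter           # illustrative job name
  static_configs:
  - targets: ['<node1-ip>:9100', '<node2-ip>:9100']   # node-exporter's default port is 9100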
Setup process: the related yaml files can be downloaded from the linked address (extraction code: mzzv)
- Start Prometheus and Grafana
[root@k8s-master ~]# mkdir pgmonitor
[root@k8s-master ~]# cd pgmonitor/
# Upload the yaml files into this directory
[root@k8s-master pgmonitor]# ls
grafana  node-exporter.yaml  prometheus
[root@k8s-master pgmonitor]# vim node-exporter.yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
...
# Deploy the node-exporter DaemonSet
[root@k8s-master pgmonitor]# kubectl create -f node-exporter.yaml
# Deploy Prometheus
[root@k8s-master pgmonitor]# cd prometheus/
[root@k8s-master prometheus]# kubectl create -f rbac-setup.yaml
[root@k8s-master prometheus]# kubectl create -f configmap.yaml
[root@k8s-master prometheus]# vim prometheus.deploy.yml
---
apiVersion: apps/v1
...
[root@k8s-master prometheus]# kubectl create -f prometheus.deploy.yml
[root@k8s-master prometheus]# kubectl create -f prometheus.svc.yml
# Check
[root@k8s-master prometheus]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-59d64cd4d4-dqx8m              1/1     Running   0          23d
coredns-59d64cd4d4-z8pdq              1/1     Running   0          23d
etcd-k8s-master                       1/1     Running   0          23d
kube-apiserver-k8s-master             1/1     Running   0          23d
kube-controller-manager-k8s-master    1/1     Running   0          23d
kube-flannel-ds-h7v2g                 1/1     Running   0          23d
kube-flannel-ds-xmzfh                 1/1     Running   0          23d
kube-flannel-ds-z9nbj                 1/1     Running   0          23d
kube-proxy-6c9cd                      1/1     Running   0          23d
kube-proxy-cnvfg                      1/1     Running   0          23d
kube-proxy-p4nx4                      1/1     Running   0          23d
kube-scheduler-k8s-master             1/1     Running   0          23d
node-exporter-g68xs                   1/1     Running   0          7m57s
node-exporter-rk2rg                   1/1     Running   0          7m57s
prometheus-68546b8d9-xk7tx            1/1     Running   0          116s
# Deploy Grafana
[root@k8s-master pgmonitor]# cd grafana/
[root@k8s-master grafana]# vim grafana-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
      component: core
...
[root@k8s-master grafana]# kubectl create -f grafana-deploy.yaml
[root@k8s-master grafana]# kubectl create -f grafana-svc.yaml
[root@k8s-master grafana]# kubectl create -f grafana-ing.yaml
# Check
[root@k8s-master grafana]# kubectl get pods -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
coredns-59d64cd4d4-dqx8m              1/1     Running   0          23d
coredns-59d64cd4d4-z8pdq              1/1     Running   0          23d
etcd-k8s-master                       1/1     Running   0          23d
grafana-core-85587c9c49-khvnk         1/1     Running   0          83s
node-exporter-g68xs                   1/1     Running   0          17m
node-exporter-rk2rg                   1/1     Running   0          17m
prometheus-68546b8d9-xk7tx            1/1     Running   0          11m
(the remaining kube-system pods are unchanged from the previous listing)
# Look up the exposed NodePorts
[root@k8s-master grafana]# kubectl get svc -n kube-system
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
grafana         NodePort    10.111.191.90    <none>        3000:31708/TCP           4m32s
kube-dns        ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   23d
node-exporter   NodePort    10.97.61.70      <none>        9100:31672/TCP           20m
prometheus      NodePort    10.111.182.252   <none>        9090:30003/TCP           14m
[root@k8s-master grafana]# kubectl get svc -n kube-system -o wide
- Open Grafana (NodePort 31708 above, i.e. http://NodeIP:31708); the default username and password are both admin. Add Prometheus as the data source and import a dashboard template for display (see the sketch below for the data source URL and a quick NodePort check).
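A few quick checks and the data-source settings, as a sketch; the node IP below stands for any cluster node, and the ports are the NodePorts from the service listing above.

# node-exporter metrics are served over NodePort 31672 (container port 9100)
curl http://<node-ip>:31672/metrics | head
# Prometheus UI is on NodePort 30003; Status -> Targets should show the scraped endpoints as UP
# browse to http://<node-ip>:30003
# Grafana data source (Configuration -> Data Sources -> Add data source -> Prometheus):
#   URL:  http://prometheus:9090      (the "prometheus" Service; Grafana runs in the same kube-system namespace)
#     or  http://<node-ip>:30003      (via the NodePort, e.g. when testing from outside the cluster)
#   Access: Server (default)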
7. Building a Highly Available Cluster
Step-by-step documentation address (extraction code: 0x23)
8. Hands-on: Deploying a Project to the Cluster
Deploying a Java project on the k8s cluster: the walkthrough below uses a sample Java project (extraction code: c78o) to go through the whole flow.
Prepare the Java project
Package it with Maven
[root@15-package demo]# ls
demojenkins
[root@15-package demo]# cd demojenkins
[root@15-package demojenkins]# mvn clean package
[root@15-package demojenkins]# ls
demojenkins.iml  Dockerfile  HELP.md  mvnw  mvnw.cmd  pom.xml  src  target
[root@15-package demojenkins]# cd target/
[root@15-package target]# ls
classes  demojenkins.jar  demojenkins.jar.original  generated-sources  generated-test-sources  maven-archiver  maven-status  surefire-reports  test-classes
Build the image
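The project directory already ships a Dockerfile (visible in the listing above), but its content is not shown here. A plausible minimal version for this layout — assuming the openjdk:8-jdk-alpine base image that appears in the docker images output below, the demojenkins.jar built by Maven, and the port 8111 used later — would look roughly like:

# Dockerfile (sketch; the real file ships with the project)
FROM openjdk:8-jdk-alpine
WORKDIR /app
COPY target/demojenkins.jar /app/demojenkins.jar
EXPOSE 8111
ENTRYPOINT ["java", "-jar", "demojenkins.jar"]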
[root@15-package demojenkins]# ls
demojenkins.iml  Dockerfile  HELP.md  mvnw  mvnw.cmd  pom.xml  src  target
[root@15-package demojenkins]# docker build -t java-demo-01:latest .
[root@15-package demojenkins]# docker images
REPOSITORY     TAG            IMAGE ID       CREATED         SIZE
java-demo-01   latest         3563dd6e175a   3 minutes ago   122MB
openjdk        8-jdk-alpine   a3562aa0b991   2 years ago     105MB
# Quick local test of the image
[root@62-cent demojenkins]# docker run -d -t -p 8111:8111 java-demo-01:latest
Push the image to an image registry (Aliyun Container Registry as the example)
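The push commands themselves are not shown in the source; a typical flow for Aliyun Container Registry, using the repository name that the deployment step below pulls from, would be roughly:

# log in to the registry (use your Aliyun registry credentials)
docker login --username=<your-aliyun-account> registry.cn-hangzhou.aliyuncs.com
# tag the locally built image with the remote repository name and a version
docker tag java-demo-01:latest registry.cn-hangzhou.aliyuncs.com/my_demo_space/java-project-01:1.0.0
# push it
docker push registry.cn-hangzhou.aliyuncs.com/my_demo_space/java-project-01:1.0.0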
Deploy the image and expose the application
# Generate the deployment yaml without creating anything (newer kubectl versions use --dry-run=client)
[root@k8s-master ~]# kubectl create deployment javademo1 --image=registry.cn-hangzhou.aliyuncs.com/my_demo_space/java-project-01:1.0.0 --dry-run -o yaml > javademo1.yaml
# Apply the yaml
[root@k8s-master ~]# kubectl apply -f javademo1.yaml
deployment.apps/javademo1 created
# Watch the pods come up
[root@k8s-master ~]# kubectl get pods
# Scale out
[root@k8s-master ~]# kubectl scale deployment javademo1 --replicas=3
# Expose the port
[root@k8s-master ~]# kubectl expose deployment javademo1 --port=8111 --target-port=8111 --type=NodePort
# The application can now be reached at NodeIP:NodePort
Summary
From packaging the project with Maven, through building and pushing the image, to creating a Deployment and exposing it as a NodePort Service, this is the complete path for getting a Java application running on the k8s cluster.