
4. Kubernetes Cluster Setup


Lab environment

Hostname   IP               Notes
k8s-80     192.168.188.80   2 CPU / 2 GB, master
k8s-81     192.168.188.81   1 CPU / 1 GB, node
k8s-82     192.168.188.82   1 CPU / 1 GB, node

There are two ways to deploy Kubernetes. The first is the binary method, which is customizable but complex and error-prone to deploy; the second is the kubeadm tool, which is simple to deploy but not customizable. Here we deploy with kubeadm.

Note: the yaml files and resources created in this lab never specify a namespace, so everything lands in the default namespace. In production, always state the namespace explicitly; otherwise default is used.
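For instance, the namespace can be pinned either on the command line or in the manifest itself; a minimal sketch (the dev namespace is created later in this guide and is only an example here):

# Create the namespace and target it explicitly on the command line
kubectl create ns dev
kubectl apply -f nginx.yaml -n dev

# Or fix it in the manifest so it cannot be forgotten
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: dev      # resources land in "dev" instead of "default"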

Environment preparation

Unless otherwise noted, run the following on all machines.

1.1 Disable SELinux and the firewall

The firewall is disabled here so it does not get in our way during day-to-day use. In production it is recommended to keep it enabled.

# SELinux
# Disable permanently
sed -i 's#enforcing#disabled#g' /etc/sysconfig/selinux
# Disable for the current session
setenforce 0

# Firewall
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld

1.2 Disable the swap partition

Once swap is triggered, system performance degrades sharply, so K8S normally requires the swap partition to be disabled.

# Method 1: disable the swap partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Method 2: tell kubelet to ignore swap
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet

1.3 Configure China-mirror yum repositories

cd /etc/yum.repos.d/
mkdir bak
mv ./* bak/
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache

1.4 Upgrade the kernel

Docker relies on relatively recent kernel features such as ipvs, so we generally need a 4.0+ kernel.

### Import the public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

### Install ELRepo
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

### Load the elrepo-kernel metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist   # 34 packages

### List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*

### Install the long-term-support kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64

### Remove the old kernel tool packages
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y

### Install the new tool packages
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64

### Check the default boot order
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.183-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)
# Boot entries are numbered from 0 and the new kernel is inserted at the top (the new kernel sits at 0, the old one at 1), so select 0.
grub2-set-default 0

# Reboot and verify
reboot

Ubuntu 16.04
# Open http://kernel.ubuntu.com/~kernel-ppa/mainline/ and pick the version you need from the list (4.16.3 as an example).
# Then download the following .deb files for your architecture:
Build for amd64 succeeded (see BUILD.LOG.amd64):
  linux-headers-4.16.3-041603_4.16.3-041603.201804190730_all.deb
  linux-headers-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb
  linux-image-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb
# Install and reboot
sudo dpkg -i *.deb

1.5 Install ipvs

ipvs is a kernel module with very high packet-forwarding performance; it is usually the first choice.

# Install IPVS
yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp

# Load the IPVS modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF

# Verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs

1.6 Kernel parameter tuning

The kernel parameters are tuned mainly so that the system suits a normally running Kubernetes.

cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max = 52706963
fs.nr_open = 52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply immediately
sysctl --system

1.7 Install Docker

Docker is one of the common container runtimes managed by k8s.

# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2

# Step 2: add the repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Step 3: point it at the Alibaba mirror
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

# Step 4: refresh the cache and install Docker CE
yum makecache fast
yum -y install docker-ce

# Step 5: start Docker and enable it on boot
service docker start
systemctl enable docker

# Step 6: registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": ["https://niphmo8u.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker

1.8 Synchronize cluster time

master

[root@k8s-80 ~]# vim /etc/chrony.conf
[root@k8s-80 ~]# grep -Ev "#|^$" /etc/chrony.conf
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.0.0/16
logdir /var/log/chrony

node

vim /etc/chrony.conf
grep -Ev "#|^$" /etc/chrony.conf
server 192.168.188.80 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

all

systemctl restart chronyd
# Verify
date
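If date alone is not convincing, chrony can report its sources directly; a quick check, assuming chrony is the only time service in use:

# On the nodes, 192.168.188.80 should show up as the selected source (marked ^*)
chronyc sources -v
# Offset and drift statistics for the local clock
chronyc tracking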

1.9 Hostname mapping

master

[root@k8s-80 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.188.80 k8s-80
192.168.188.81 k8s-81
192.168.188.82 k8s-82

[root@k8s-80 ~]# scp -p /etc/hosts 192.168.188.81:/etc/hosts
[root@k8s-80 ~]# scp -p /etc/hosts 192.168.188.82:/etc/hosts

1.10 Configure the Kubernetes repository

The Alibaba Cloud mirror is configured here; the guide is at https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11KGjWvc

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#setenforce 0
#yum install -y kubelet kubeadm kubectl
#systemctl enable kubelet && systemctl start kubelet

# Note
# Because the upstream does not expose a sync mechanism, the GPG index check may fail; in that case install with
# yum install -y --nogpgcheck kubelet kubeadm kubectl

# Version 1.22.3 is installed here
yum makecache --nogpgcheck
# kubeadm and kubectl are commands; kubelet is a service
yum install -y kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3

systemctl enable kubelet.service

1.11 Pull the images

Because the images cannot be pulled directly from inside China, we build them ourselves with Alibaba Cloud Codeup plus the Alibaba Cloud Container Registry and pull them from there.

master

# Print the list of images kubeadm will use. A config file can be supplied to customize images or the image repository.
# Major versions must match; a minor-version mismatch does not matter.
[root@k8s-80 ~]# kubeadm config images list
I0526 12:52:43.766362    3813 version.go:255] remote version is much newer: v1.24.1; falling back to: stable-1.22
k8s.gcr.io/kube-apiserver:v1.22.10
k8s.gcr.io/kube-controller-manager:v1.22.10
k8s.gcr.io/kube-scheduler:v1.22.10
k8s.gcr.io/kube-proxy:v1.22.10
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

The new code-management platform: https://codeup.aliyun.com/

1. Create a repository in Codeup, create a directory, then add a Dockerfile.

2. Go back to the Container Registry service and create an image repository.

3. Bind Codeup.

4. Create a personal access token.

5. Back in the Container Registry service, enter the personal access token for the Codeup binding.

6. Edit the build rule so it points at the Dockerfile in the branch directory.

7. Pull the images.

all

# Once the builds are ready, pull the images
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-apiserver:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-controller-manager:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-scheduler:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-proxy:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/pause:3.5
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/etcd:3.5.0-0
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/coredns:v1.8.4

# Re-tag them back to the names kubeadm expects
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-apiserver:v1.22.10 k8s.gcr.io/kube-apiserver:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-controller-manager:v1.22.10 k8s.gcr.io/kube-controller-manager:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-scheduler:v1.22.10 k8s.gcr.io/kube-scheduler:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-proxy:v1.22.10 k8s.gcr.io/kube-proxy:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/pause:3.5 k8s.gcr.io/pause:3.5
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4

# The same images built under another registry namespace:
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/apiserver:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/controller:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/etcd:v3.5.0
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/pause:v3.5
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/proxy:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/scheduler:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/coredns:v1.8.4

1.12 Initialize the nodes

master

[root@k8s-master ~]# kubeadm init --help
[root@k8s-80 ~]# kubeadm init --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.188.80
# Note: the pod CIDR must match flannel's network

During the Kubernetes installation you may run into the following:

[root@k8s-80 ~]# tail -100 /var/log/messages
# The log shows
failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

The cgroup driver that kubelet expects (systemd, under kubeadm) does not match the cgroupfs driver that our Docker installation is using; because of the mismatch the containers cannot start.

[root@k8s-master ~]# docker info | grep -i cgroup   # check the driver
Cgroup Driver: cgroupfs

There are two fixes: change Docker, or change kubelet. Here we take the first approach; for the second, see https://www.cnblogs.com/hongdada/p/9771857.html
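For reference only, a rough sketch of the kubelet-side alternative (not used here) would be to make kubelet follow Docker's cgroupfs driver instead; verify the exact flag or config field against your kubelet version before relying on it:

# Option A: extra args for kubelet (CentOS, /etc/sysconfig/kubelet)
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"

# Option B: kubelet config file (/var/lib/kubelet/config.yaml)
cgroupDriver: cgroupfs

# Either way, reload and restart kubelet afterwards
systemctl daemon-reload && systemctl restart kubelet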

[root@k8s-80 ~]# vim /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],    # add this line
    "registry-mirrors": ["https://niphmo8u.mirror.aliyuncs.com"]
}
systemctl daemon-reload
systemctl restart docker

# Remove the files produced by the first init attempt (the init output lists them)
rm -rf XXX
# Then run the cleanup command
kubeadm reset

# Re-initialize
[root@k8s-80 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.188.80
# At the end, the init output gives two follow-up steps: create the kubeconfig directory on the master,
# and a join token valid for 24h that must be used within that window to add the nodes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# "get nodes" shows the status, roles and version of the cluster
# kubectl get no   or   kubectl get nodes
[root@k8s-80 ~]# kubectl get no
NAME     STATUS     ROLES                  AGE    VERSION
k8s-80   NotReady   control-plane,master   6m9s   v1.22.3

node

[root@k8s02 ~]# kubeadm join 192.168.188.80:6443 --token cp36la.obg1332jj7wl11az \
    --discovery-token-ca-cert-hash sha256:ee5053647a18fc69b59b648c7e3f7a8f039d5553531d627793242d193879e0ba

This node has joined the cluster

# When the token has expired, generate a new one with:
kubeadm token create --print-join-command
# My output:
kubeadm join 192.168.91.133:6443 --token 3do9zb.7unh9enw8gv7j4za \
    --discovery-token-ca-cert-hash sha256:b75f52f8e2ab753c1d18b73073e74393c72a3f8dc64e934765b93a38e7389385

master

[root@k8s-80 ~]# kubectl get no
NAME     STATUS     ROLES                  AGE     VERSION
k8s-80   NotReady   control-plane,master   6m55s   v1.22.3
k8s-81   NotReady   <none>                 18s     v1.22.3
k8s-82   NotReady   <none>                 8s      v1.22.3

# Every get command accepts --namespace or -n to pick a namespace. This is especially useful for
# the Pods in kube-system, which are the services Kubernetes itself needs in order to run.

[root@k8s-80 ~]# kubectl get po -n kube-system
# A few of the services are not usable yet because the network plugin is missing
[root@k8s-master ~]# kubectl get po -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-n6mfw             0/1     Pending   0          164m
coredns-78fcd69978-xshwb             0/1     Pending   0          164m
etcd-k8s-master                      1/1     Running   0          165m
kube-apiserver-k8s-master            1/1     Running   0          165m
kube-controller-manager-k8s-master   1/1     Running   1          165m
kube-proxy-g7z79                     1/1     Running   0          144m
kube-proxy-wl4ct                     1/1     Running   0          145m
kube-proxy-x59w9                     1/1     Running   0          164m
kube-scheduler-k8s-master            1/1     Running   1          165m

1.13 Install the network plugin

Detailed introduction to flannel

Flannel network model

Kubernetes relies on a third-party network plugin to implement its networking, so installing one is a prerequisite. There are several third-party plugins; the common ones are flannel, calico and canal (flannel + calico). They all provide the basic networking functions, such as handing out IP networks to each Node.

Kubernetes defines the network model but leaves its implementation to the network plugins; the main job of a CNI plugin is to let Pods communicate across hosts. Common CNI plugins: 1. Flannel 2. Calico 3. Canal 4. Contiv 5. OpenContrail 6. NSX-T 7. Kube-router
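As a side note, once a CNI plugin is installed you can see its footprint on each node; a rough sketch of what flannel typically leaves behind (exact paths and file names may vary by version):

# CNI configuration dropped by the plugin
ls /etc/cni/net.d/
10-flannel.conflist

# Per-node subnet handed out from the 10.244.0.0/16 pool
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true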

We use flannel here. Save this yml file and upload it to the server: https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

flannel's GitHub repository

master

[root@k8s-80 ~]# ls
anaconda-ks.cfg  flannel.yml

# Some commands need a config file, and "apply" pushes a config file onto resources in the cluster.
# It can also be done via standard input (STDIN), but apply is preferable because it makes clear how the
# cluster is being used and which configuration is being applied.
# Almost any configuration can be applied, but be sure about what you are applying, otherwise the results
# may be unexpected.

[root@k8s-80 ~]# kubectl apply -f flannel.yml

[root@k8s-master ~]# cat kube-flannel.yaml | grep Network
      "Network": "10.244.0.0/16",
      hostNetwork: true
[root@k8s-master ~]# cat kube-flannel.yaml | grep -w image | grep -v "#"
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1

When the images cannot be pulled, pull them from the builds in your own Alibaba Cloud registry instead -- the same method used for the earlier images.

all

# Pull
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel:v0.17.0
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel-cni-plugin:v1.0.1

# Re-tag
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel:v0.17.0 rancher/mirrored-flannelcni-flannel:v0.17.0
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel-cni-plugin:v1.0.1 rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1

master

[root@k8s-80 ~]# kubectl get no   # check: all nodes Ready means the cluster is basically up and healthy
NAME     STATUS   ROLES                  AGE   VERSION
k8s-80   Ready    control-plane,master   47m   v1.22.3
k8s-81   Ready    <none>                 41m   v1.22.3
k8s-82   Ready    <none>                 40m   v1.22.3

1.14 Enable ipvs for kube-proxy

kube-proxy uses iptables mode by default; here we switch it to ipvs.

Once a resource has been created, how do you change it? That is what kubectl edit is for.

You can edit any resource in the cluster with this command; it opens the default text editor.

master

# Edit the kube-proxy configuration
[root@k8s-80 ~]# kubectl edit configmap kube-proxy -n kube-system
# Find this part of the config:
    minSyncPeriod: 0s
    scheduler: ""
    syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"          # add this
    nodePortAddresses: null
# "mode" is empty by default, which means iptables; change it to ipvs.
# "scheduler" is empty by default, which means round-robin load balancing.
# Save and exit when done.

# 3. Delete all kube-proxy pods so they are recreated with the new config
kubectl delete pod xxx -n kube-system
# kubectl delete po `kubectl get po -n kube-system | grep proxy | awk '{print $1}'` -n kube-system

# 4. Check the kube-proxy pod logs
kubectl logs kube-proxy-xxx -n kube-system
# A line such as "Using ipvs Proxier" confirms ipvs mode; alternatively run ipvsadm -l

# Deleting a kube-proxy pod simply makes it be recreated
# Only delete the kube-proxy pods in the given namespace
# kubectl delete ns xxxx deletes an entire namespace -- be careful

[root@k8s-80 ~]# kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-78fcd69978-d8cv5         1/1     Running   0          6m43s
coredns-78fcd69978-qp7f6         1/1     Running   0          6m43s
etcd-k8s-80                      1/1     Running   0          6m57s
kube-apiserver-k8s-80            1/1     Running   0          6m59s
kube-controller-manager-k8s-80   1/1     Running   0          6m58s
kube-flannel-ds-88kmk            1/1     Running   0          2m58s
kube-flannel-ds-wfvst            1/1     Running   0          2m58s
kube-flannel-ds-wq2vz            1/1     Running   0          2m58s
kube-proxy-4fpm9                 1/1     Running   0          6m28s
kube-proxy-hhb5s                 1/1     Running   0          6m25s
kube-proxy-jr5kl                 1/1     Running   0          6m43s
kube-scheduler-k8s-80            1/1     Running   0          6m57s

[root@k8s-80 ~]# kubectl delete pod kube-proxy-4fpm9 -n kube-system
pod "kube-proxy-4fpm9" deleted
[root@k8s-80 ~]# kubectl delete pod kube-proxy-hhb5s -n kube-system
pod "kube-proxy-hhb5s" deleted
[root@k8s-80 ~]# kubectl delete pod kube-proxy-jr5kl -n kube-system
pod "kube-proxy-jr5kl" deleted

# Check the cluster state -- the kube-proxy pods have been recreated
[root@k8s-80 ~]# kubectl get po -n kube-system

# Check ipvs
[root@k8s-80 ~]# ipvsadm -l

kubectl get

The get command lists the resources currently available in the cluster, including:

  • Namespace
  • Pod
  • Node
  • Deployment
  • Service
  • ReplicaSet

Every get command accepts --namespace or -n to pick a namespace. This is especially useful for the Pods in kube-system, which are the services Kubernetes itself needs in order to run.

[root@k8s-80 ~]# kubectl get ns   # list the namespaces
NAME              STATUS   AGE
default           Active   23h   # the default namespace; usable without -n
kube-node-lease   Active   23h   # monitoring-related namespace
kube-public       Active   23h   # public namespace
kube-system       Active   23h   # system namespace

# Create a new namespace
[root@k8s-80 ~]# kubectl create ns dev
namespace/dev created
[root@k8s-80 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   23h
dev               Active   1s
kube-node-lease   Active   23h
kube-public       Active   23h
kube-system       Active   23h

# Do not delete namespaces casually: it deletes every resource inside, although they can be recovered through etcd.

# Look at the service information of a given namespace
# Without a namespace you get the default namespace
[root@k8s-80 ~]# kubectl get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   23h

# Use kubectl get endpoints to verify that the DNS endpoints are exposed; these resolve inside k8s.
# The IPs are virtual: they do not show up in ip a / ifconfig on the hosts.
[root@k8s-80 ~]# kubectl get ep -n kube-system
NAME       ENDPOINTS                                               AGE
kube-dns   10.244.2.8:53,10.244.2.9:53,10.244.2.8:53 + 3 more...   23h

# Get the pods of a given namespace
[root@k8s-80 ~]# kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS      AGE
coredns-78fcd69978-f9pcw         1/1     Running   3 (12h ago)   24h
coredns-78fcd69978-hprbh         1/1     Running   3 (12h ago)   24h
etcd-k8s-80                      1/1     Running   4 (12h ago)   24h
kube-apiserver-k8s-80            1/1     Running   4 (12h ago)   24h
kube-controller-manager-k8s-80   1/1     Running   6 (98m ago)   24h
kube-flannel-ds-28w79            1/1     Running   4 (12h ago)   24h
kube-flannel-ds-bsw2t            1/1     Running   2 (12h ago)   24h
kube-flannel-ds-rj57q            1/1     Running   3 (12h ago)   24h
kube-proxy-d8hs2                 1/1     Running   0             11h
kube-proxy-gms7v                 1/1     Running   0             11h
kube-proxy-phbnk                 1/1     Running   0             11h
kube-scheduler-k8s-80            1/1     Running   6 (98m ago)   24h

Creating pods

Kubernetes has many built-in controllers; they act as state machines that drive Pods toward a desired state and behavior.

A Deployment provides a declarative way to define Pods and ReplicaSets, replacing the old ReplicationController for convenient application management. Typical use cases:

  • Define a Deployment to create Pods and a ReplicaSet
  • Rolling upgrades and rollbacks of an application
  • Scaling out and in
  • Pausing and resuming a Deployment

1.1 Creating pods from a yml file

We have already used yml files above; as a reminder, mind the indentation!

1.1.1 Pod resource manifest explained

apiVersion: apps/v1     # required, API version
kind: Deployment        # required, resource type
metadata:               # required, metadata
  name: nginx           # required, name of the workload
  namespace: nginx      # optional, namespace for the pods; defaults to the default namespace
  labels:               # optional but usually set; labels link the related resources together
    app: nginx
  annotations:          # optional, annotations
    app: nginx
spec:                   # required, detailed attributes
  replicas: 3           # required, number of replicas, i.e. 3 pods
  selector:             # replica selector
    matchLabels:        # match the labels written above
      app: nginx
  template:             # pod template, matching the metadata above
    metadata:
      labels:
        app: nginx
    spec:               # required, container details
      containers:       # required, container list
      - name: nginx             # container name
        image: nginx:1.20.2     # image and tag used by the container
        ports:
        - containerPort: 80     # expose port 80 of the container

Write a simple pod:

[root@k8s-80 ~]# mkdir /k8syaml
[root@k8s-80 ~]# cd /k8syaml
[root@k8s-80 k8syaml]# vim nginx.yaml
# Taken from the official docs
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.2
        ports:
        - containerPort: 80

# Apply the resource to create and run it; create can also create it, but cannot update it later
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
[root@k8s-80 k8syaml]# kubectl get po   # pods created in the default namespace
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-66b6c48dd5-hpxc6   0/1     ContainerCreating   0          18s
nginx-deployment-66b6c48dd5-nslqj   0/1     ContainerCreating   0          18s
nginx-deployment-66b6c48dd5-vwxlp   0/1     ContainerCreating   0          18s

[root@k8s-80 k8syaml]# kubectl describe po nginx-deployment-66b6c48dd5-hpxc6   # print the details of one pod

# How to get in:
# enter a container started by the pod
# kubectl exec -it podName -n nsName /bin/sh     # enter the container
# kubectl exec -it podName -n nsName /bin/bash   # enter the container
[root@k8s-80 k8syaml]# kubectl exec -it nginx-deployment-66b6c48dd5-hpxc6 -- bash

# List the available API versions
[root@k8s-80 k8syaml]# kubectl api-versions   # here you can find the apiVersion used in nginx.yaml

[root@k8s-80 k8syaml]# kubectl get po   # check the pod status in this namespace
NAME                    READY   STATUS    RESTARTS   AGE
nginx-cc4b758d6-rrtcc   1/1     Running   0          2m14s
nginx-cc4b758d6-vmrw5   1/1     Running   0          2m14s
nginx-cc4b758d6-w84qb   1/1     Running   0          2m14s

# Show the full details of one pod
[root@k8s-80 k8syaml]# kubectl describe po nginx-deployment-66b6c48dd5-b82ck
# Print the information of every node
[root@k8s-80 k8syaml]# kubectl describe node
# Keep a node below roughly 40 pods; 20-30 is ideal (Alibaba's recommendation)

Defining a Service

A Service is a REST object in Kubernetes, like a Pod. Like every REST object, a Service definition can be POSTed to the API server to create a new instance. The name of a Service object must be a valid RFC 1035 label name.

From the service theory earlier we know that sometimes the pods need to be reached from outside, and that is what a service is for. A service is an abstract resource: it sits in front of the pods as a proxy/load-balancing layer, and hitting that layer reaches the pods. The most commonly used types are ClusterIP and NodePort.

Kubernetes ServiceTypes let you pick the kind of Service you need; the default is ClusterIP.

The possible values of Type and their behavior:

  • ClusterIP: exposes the service on a cluster-internal IP; the service is only reachable from inside the cluster. This is the default ServiceType.
  • NodePort: exposes the service on each node's IP at a static port (the NodePort). A NodePort service routes to an automatically created ClusterIP service. You can reach it from outside the cluster via <NodeIP>:<NodePort>.
  • LoadBalancer: exposes the service externally through a cloud provider's load balancer. The external load balancer routes to automatically created NodePort and ClusterIP services.
  • ExternalName: maps the service to the contents of the externalName field (for example foo.bar.example.com) by returning a CNAME record. No proxying of any kind is set up.

For the NodePort type, the Kubernetes control plane allocates a port from the range given by the --service-node-port-range flag (default 30000-32767). It is not really recommended in production: it works with few services, but its limits show once there are many. It is well suited to testing.
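If the default 30000-32767 window ever becomes too small, the range can in principle be widened on the kube-apiserver; a hedged sketch only -- edit the static pod manifest on the master and verify the flag against your version (30000-39999 is just an example range):

# /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=30000-39999   # widen the NodePort window
    # ... the remaining flags stay unchanged; kubelet restarts the static pod automatically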

2.1 Service yml manifest explained

apiVersion: v1            # required, API version
kind: Service             # required, the type is Service
metadata:                 # required, metadata
  name: nginx             # required, service name, usually the same as the workload name
  namespace: nginx        # optional, namespace for the service; defaults to default
spec:                     # required, detailed attributes
  type:                   # optional, service type, ClusterIP by default
  selector:               # replica selector
    app: nginx
  ports:                  # required, port details
  - port: 80              # port exposed by the svc
    targetPort: 80        # port exposed by the pod
    nodePort: 30005       # optional, range 30000-32767

2.2 ClusterIP

ClusterIP is the cluster-internal mode: it is the default service type in k8s, reachable only from inside the cluster and never from outside. It mainly gives in-cluster Pods a fixed address for reaching the service. The address is allocated automatically by default, but a fixed IP can be set with the clusterIP field.
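A minimal sketch of pinning the ClusterIP instead of letting it be allocated; the address must fall inside the --service-cidr used at kubeadm init (10.96.0.0/12 here), and 10.96.10.10 is only an example:

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: 10.96.10.10   # fixed address instead of an auto-allocated one
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80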

2.2.1 The ClusterIP yml file

Here we usually keep all the resources in one yml file, so we continue writing in nginx.yml -- just remember to separate the two resources with ---.

cd /opt/k8s
vim nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80

2.2.2 Apply it and check the IP

Note that apply also performs updates, so there is no need to delete and re-create.

kubectl apply -f nginx.yml

kubectl get svc

We can see that one more IP has appeared.

2.2.3 Verification

(1) Accessing this IP from inside the cluster works:

curl 10.101.187.222

(2) Accessing it from outside the cluster fails.

(3) In short, this matches ClusterIP behavior: reachable internally, unreachable externally.

2.3 NodePort type

Exposes the service on each node's IP at a static port (the NodePort). Note that once the port is opened on the nodes, the service can be reached through that IP:port on every node. The drawback is that there are only 2768 ports; once you have more services than ports, this type no longer works.

2.3.1 The NodePort yml file

Compared with the ClusterIP yml, only type and nodePort are added; mind the allowed range.

cd /opt/k8s
vim nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30005

2.3.2 Apply it and check the IP

kubectl apply -f nginx.yml

kubectl get svc

We can see the type has changed to NodePort and the port column now shows 80:30005.

2.3.3 Verification

(1) Accessing the IP from inside the cluster works:

curl 10.101.187.222

(2) What matters now is external access. From outside, access the host IP:30005 (the service IP itself is virtual -- k8s hands it out, so it is only reachable inside the cluster, not outside) -- and it works.

# Entering any of the three host IPs plus :30005 in a browser works. Because this is layer-4 proxying with session persistence, the round-robin effect is hard to observe.

(3) In short, this matches NodePort behavior: reachable both internally and externally.

2.4 LoadBalancer

Exposes the service externally through the cloud provider's load balancer; this binds the service directly to an SLB.

2.4.1 The LoadBalancer yml file

cd /opt/k8s
vim nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

2.4.2 Check the IP

2.5 ExternalName

An ExternalName Service is a special case of Service: it has no selector and defines no ports or Endpoints. Its job is to return an alias for a service that lives outside the cluster. The external address gets one more layer of wrapping inside the cluster (in practice the cluster DNS server resolves the CNAME to the external address), so it can be reached with an in-cluster name.

For example your company's image registry: at first it is reached by IP, and later, once the domain name is issued, by domain. You cannot go and change every reference, but you can create an ExternalName that first points at the IP and is later switched to the domain.

2.5.1 The ExternalName yml file

Here we verify that, from inside a container, accessing nginx resolves to Baidu.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: ExternalName
  externalName: www.baidu.com

2.5.2 Apply it and check the status

kubectl apply -f second.yml

kubectl get svc

2.5.3 Verification

Enter the container:

[root@master-01 k8s]# kubectl exec -it nginx-58b9b8ff79-hj4gv -- bash

Use nslookup to see where the name goes:

nslookup nginx

This shows that in ExternalName mode, accessing the service name inside the cluster is redirected to the address we configured.

2.6 Ingress NGINX

[root@k8s-80 k8syaml]# kubectl delete -f nginx.yaml   # delete everything of the kinds and names defined in nginx.yaml
deployment.apps "nginx-deployment" deleted
service "nginx" deleted

This is the official yaml; I pulled the images it references, pushed them to Alibaba Cloud and changed the manifest accordingly. Base your edits on https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/baremetal/deploy.yaml

ingress-nginx apiVersion: v1 kind: Namespace metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxname: ingress-nginx --- apiVersion: v1 automountServiceAccountToken: true kind: ServiceAccount metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginxnamespace: ingress-nginx --- apiVersion: v1 kind: ServiceAccount metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admissionnamespace: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginxnamespace: ingress-nginx rules: - apiGroups:- ""resources:- namespacesverbs:- get - apiGroups:- ""resources:- configmaps- pods- secrets- endpointsverbs:- get- list- watch - apiGroups:- ""resources:- servicesverbs:- get- list- watch - apiGroups:- networking.k8s.ioresources:- ingressesverbs:- get- list- watch - apiGroups:- networking.k8s.ioresources:- ingresses/statusverbs:- update - apiGroups:- networking.k8s.ioresources:- ingressclassesverbs:- get- list- watch - apiGroups:- ""resourceNames:- ingress-controller-leaderresources:- configmapsverbs:- get- update - apiGroups:- ""resources:- configmapsverbs:- create - apiGroups:- coordination.k8s.ioresourceNames:- ingress-controller-leaderresources:- leasesverbs:- get- update - apiGroups:- coordination.k8s.ioresources:- leasesverbs:- create - apiGroups:- ""resources:- eventsverbs:- create- patch - apiGroups:- discovery.k8s.ioresources:- endpointslicesverbs:- list- watch- get --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admissionnamespace: ingress-nginx rules: - apiGroups:- ""resources:- secretsverbs:- get- create --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx rules: - apiGroups:- ""resources:- configmaps- endpoints- nodes- pods- secrets- namespacesverbs:- list- watch - apiGroups:- coordination.k8s.ioresources:- leasesverbs:- list- watch - apiGroups:- ""resources:- nodesverbs:- get - apiGroups:- ""resources:- servicesverbs:- get- list- watch - apiGroups:- networking.k8s.ioresources:- ingressesverbs:- get- list- watch - apiGroups:- ""resources:- eventsverbs:- create- patch - apiGroups:- networking.k8s.ioresources:- ingresses/statusverbs:- update - apiGroups:- networking.k8s.ioresources:- ingressclassesverbs:- get- list- watch - apiGroups:- discovery.k8s.ioresources:- endpointslicesverbs:- list- watch- get --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: 
ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission rules: - apiGroups:- admissionregistration.k8s.ioresources:- validatingwebhookconfigurationsverbs:- get- update --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginxnamespace: ingress-nginx roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: ingress-nginx subjects: - kind: ServiceAccountname: ingress-nginxnamespace: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admissionnamespace: ingress-nginx roleRef:apiGroup: rbac.authorization.k8s.iokind: Rolename: ingress-nginx-admission subjects: - kind: ServiceAccountname: ingress-nginx-admissionnamespace: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata:labels:app.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: ingress-nginx subjects: - kind: ServiceAccountname: ingress-nginxnamespace: ingress-nginx --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: ingress-nginx-admission subjects: - kind: ServiceAccountname: ingress-nginx-admissionnamespace: ingress-nginx --- apiVersion: v1 data:allow-snippet-annotations: "true" kind: ConfigMap metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-controllernamespace: ingress-nginx --- apiVersion: v1 kind: Service metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-controllernamespace: ingress-nginx spec:ipFamilies:- IPv4ipFamilyPolicy: SingleStackports:- appProtocol: httpname: httpport: 80protocol: TCPtargetPort: http- appProtocol: httpsname: httpsport: 443protocol: TCPtargetPort: httpsselector:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtype: NodePort --- apiVersion: v1 kind: Service metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-controller-admissionnamespace: ingress-nginx spec:ports:- appProtocol: httpsname: https-webhookport: 443targetPort: webhookselector:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtype: ClusterIP --- apiVersion: apps/v1 
kind: Deployment metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-controllernamespace: ingress-nginx spec:minReadySeconds: 0revisionHistoryLimit: 10selector:matchLabels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxtemplate:metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxspec:containers:- args:- /nginx-ingress-controller- --election-id=ingress-controller-leader- --controller-class=k8s.io/ingress-nginx- --ingress-class=nginx- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller- --validating-webhook=:8443- --validating-webhook-certificate=/usr/local/certificates/cert- --validating-webhook-key=/usr/local/certificates/keyenv:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespace- name: LD_PRELOADvalue: /usr/local/lib/libmimalloc.soimage: registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143imagePullPolicy: IfNotPresentlifecycle:preStop:exec:command:- /wait-shutdownlivenessProbe:failureThreshold: 5httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 1name: controllerports:- containerPort: 80name: httpprotocol: TCP- containerPort: 443name: httpsprotocol: TCP- containerPort: 8443name: webhookprotocol: TCPreadinessProbe:failureThreshold: 3httpGet:path: /healthzport: 10254scheme: HTTPinitialDelaySeconds: 10periodSeconds: 10successThreshold: 1timeoutSeconds: 1resources:requests:cpu: 100mmemory: 90MisecurityContext:allowPrivilegeEscalation: truecapabilities:add:- NET_BIND_SERVICEdrop:- ALLrunAsUser: 101volumeMounts:- mountPath: /usr/local/certificates/name: webhook-certreadOnly: truednsPolicy: ClusterFirstnodeSelector:kubernetes.io/os: linuxserviceAccountName: ingress-nginxterminationGracePeriodSeconds: 300volumes:- name: webhook-certsecret:secretName: ingress-nginx-admission --- apiVersion: batch/v1 kind: Job metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission-createnamespace: ingress-nginx spec:template:metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission-createspec:containers:- args:- create- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc- --namespace=$(POD_NAMESPACE)- --secret-name=ingress-nginx-admissionenv:- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceimage: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10fimagePullPolicy: IfNotPresentname: createsecurityContext:allowPrivilegeEscalation: falsenodeSelector:kubernetes.io/os: linuxrestartPolicy: OnFailuresecurityContext:fsGroup: 2000runAsNonRoot: truerunAsUser: 2000serviceAccountName: ingress-nginx-admission --- apiVersion: batch/v1 kind: Job 
metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission-patchnamespace: ingress-nginx spec:template:metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission-patchspec:containers:- args:- patch- --webhook-name=ingress-nginx-admission- --namespace=$(POD_NAMESPACE)- --patch-mutating=false- --secret-name=ingress-nginx-admission- --patch-failure-policy=Failenv:- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespaceimage: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10fimagePullPolicy: IfNotPresentname: patchsecurityContext:allowPrivilegeEscalation: falsenodeSelector:kubernetes.io/os: linuxrestartPolicy: OnFailuresecurityContext:fsGroup: 2000runAsNonRoot: truerunAsUser: 2000serviceAccountName: ingress-nginx-admission --- apiVersion: networking.k8s.io/v1 kind: IngressClass metadata:labels:app.kubernetes.io/component: controllerapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: nginx spec:controller: k8s.io/ingress-nginx --- apiVersion: admissionregistration.k8s.io/v1 kind: ValidatingWebhookConfiguration metadata:labels:app.kubernetes.io/component: admission-webhookapp.kubernetes.io/instance: ingress-nginxapp.kubernetes.io/name: ingress-nginxapp.kubernetes.io/part-of: ingress-nginxapp.kubernetes.io/version: 1.4.0name: ingress-nginx-admission webhooks: - admissionReviewVersions:- v1clientConfig:service:name: ingress-nginx-controller-admissionnamespace: ingress-nginxpath: /networking/v1/ingressesfailurePolicy: FailmatchPolicy: Equivalentname: validate.nginx.ingress.kubernetes.iorules:- apiGroups:- networking.k8s.ioapiVersions:- v1operations:- CREATE- UPDATEresources:- ingressessideEffects: None

master

[root@k8s-80 k8syaml]# kubectl apply -f ingress-nginx.yml

[root@k8s-80 k8syaml]# kubectl get po -n ingress-nginx
NAME                                        READY   STATUS              RESTARTS   AGE
ingress-nginx-admission-create-wjb9d        0/1     Completed           0          12s
ingress-nginx-admission-patch-s9pc8         0/1     Completed           0          12s
ingress-nginx-controller-6b548d5677-t42qc   0/1     ContainerCreating   0          12s

# Note: if the images cannot be pulled, use the Alibaba Cloud repository
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/webhook-certgen:v20220916
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/controller_ingress:v1.4.0
# it is recommended to change the image fields inside the yaml accordingly

[root@k8s-80 k8syaml]# kubectl describe po ingress-nginx-admission-create-wjb9d -n ingress-nginx

[root@k8s-80 k8syaml]# kubectl get po -n ingress-nginx   # the admission jobs only set up the certificate secret, ignore them
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-wjb9d        0/1     Completed   0          48s
ingress-nginx-admission-patch-s9pc8         0/1     Completed   0          48s
ingress-nginx-controller-6b548d5677-t42qc   1/1     Running     0          48s

[root@k8s-80 k8syaml]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.104.205   <none>        80:30799/TCP,443:31656/TCP   66s
ingress-nginx-controller-admission   ClusterIP   10.107.116.128   <none>        443/TCP                      66s

Test

[root@k8s-80 k8syaml]# vim nginx.yaml
# Change the image source; here we use an image pulled through Alibaba Cloud:
# registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3

[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml

[root@k8s-80 k8syaml]# kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
nginx-b65884ff7-24f4j   1/1     Running   0          49s
nginx-b65884ff7-8qss6   1/1     Running   0          49s
nginx-b65884ff7-vhnbt   1/1     Running   0          49s

[root@k8s-80 k8syaml]# kubectl exec -it nginx-b65884ff7-24f4j -- bash
[root@nginx-b65884ff7-24f4j /]# yum provides nslookup
[root@nginx-b65884ff7-24f4j /]# yum -y install bind-utils

[root@nginx-b65884ff7-24f4j /]# nslookup kubernetes
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1

[root@nginx-b65884ff7-24f4j /]# nslookup nginx
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      nginx.default.svc.cluster.local
Address:   10.111.201.41

[root@nginx-b65884ff7-24f4j /]# curl nginx.default.svc.cluster.local   # returns the html normally
[root@nginx-b65884ff7-24f4j /]# curl nginx   # also works -- what is actually accessed is simply the service name

Switch the Service to ClusterIP mode

# Comment out type and nodePort in the Service
[root@k8s-80 k8syaml]# vim nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  #type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    #nodePort: 30005

[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml

[root@k8s-80 k8syaml]# kubectl exec -it nginx-b65884ff7-24f4j -- bash
[root@nginx-b65884ff7-24f4j /]# curl nginx   # resolved internally

# In another terminal
[root@k8s-80 /]# kubectl get svc   # what is resolved is just the name (nginx); with a headless service the CLUSTER-IP would be used instead
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   30m
nginx        ClusterIP   10.111.201.41   <none>        80/TCP    4m59s
[root@k8s-80 /]# kubectl get svc -n ingress-nginx   # check the ports exposed by the Service
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.104.205   <none>        80:30799/TCP,443:31656/TCP   7m13s
ingress-nginx-controller-admission   ClusterIP   10.107.116.128   <none>        443/TCP                      7m13s

[root@nginx-b65884ff7-24f4j /]# exit

Enable the Ingress

[root@k8s-80 /]# kubectl explain ing   # check the VERSION

[root@k8s-80 k8syaml]# vi nginx.yaml   # add a kind: Ingress section
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  #type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    #nodePort: 30005
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: www.lin.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80

[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml

[root@k8s-80 k8syaml]# kubectl get po
NAME                    READY   STATUS    RESTARTS   AGE
nginx-b65884ff7-24f4j   1/1     Running   0          6m38s
nginx-b65884ff7-8qss6   1/1     Running   0          6m38s
nginx-b65884ff7-vhnbt   1/1     Running   0          6m38s

# Switch the page to a php page
[root@k8s-80 k8syaml]# kubectl exec -it nginx-b65884ff7-24f4j -- bash
[root@nginx-b65884ff7-24f4j /]# mv /usr/local/nginx/html/index.html /usr/local/nginx/html/index.php
[root@nginx-b65884ff7-24f4j /]# >/usr/local/nginx/html/index.php
[root@nginx-b65884ff7-24f4j /]# vi /usr/local/nginx/html/index.php
<?
phpinfo();
?>
[root@nginx-b65884ff7-24f4j /]# /etc/init.d/php-fpm restart
Gracefully shutting down php-fpm . done
Starting php-fpm  done
[root@nginx-b65884ff7-24f4j /]# exit
exit

# Do the same in the other two replicas
# Edit the hosts file on Windows and add the mapping:
192.168.188.80 www.lin.com
# In the browser you must use the domain (and only the domain), with the port appended: www.lin.com:30799 shows the PHP info page
# If you do not know the port, run kubectl get svc -n ingress-nginx

Reverse proxy
  • Install nginx

    We install it on the master, since the master is under little load at this point.

    # A script is used here
    # Note: when the Kubernetes repo was configured from the Alibaba mirror it was mentioned that, because the
    # upstream does not expose a sync mechanism, the GPG index check may fail -- so run these two commands first
    [root@k8s-80 ~]# sed -i "s#repo_gpgcheck=1#repo_gpgcheck=0#g" /etc/yum.repos.d/kubernetes.repo
    [root@k8s-80 ~]# sed -i "s#gpgcheck=1#gpgcheck=0#g" /etc/yum.repos.d/kubernetes.repo

    [root@k8s-80 ~]# sh nginx.sh
  • Edit the config file for layer-4 forwarding

    [root@k8s-80 ~]# cp /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx.conf.bak
    [root@k8s-80 /]# grep -Ev "#|^$" /usr/local/nginx/conf/nginx.conf
    worker_processes  1;
    events {
        worker_connections  1024;
    }
    stream {
        upstream tcp_proxy {
            server 192.168.188.80:30799;
        }
        server {
            listen 80;
            proxy_pass tcp_proxy;
        }
    }
    http {
        include       mime.types;
        default_type  application/octet-stream;
        sendfile      on;
        keepalive_timeout  65;
    }

    [root@k8s-80 config]# nginx -t
    nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
    nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
    [root@k8s-80 config]# nginx -s reload

    # Enable start on boot
    [root@k8s-80 config]# vim /etc/rc.local
    # add one command:
    nginx
    [root@k8s-80 config]# chmod +x /etc/rc.d/rc.local

    # Verify in a browser: open www.lin.com

  • Labels

    Method 1: labeling pods

    # Get pod information (default namespace) with the extra columns, e.g. the pod IP and the node it runs on
    [root@k8s-80 ~]# kubectl get po -o wide
    NAME                    READY   STATUS    RESTARTS       AGE     IP            NODE     NOMINATED NODE   READINESS GATES
    nginx-b65884ff7-24f4j   1/1     Running   2 (142m ago)   6h10m   10.244.1.12   k8s-81   <none>           <none>
    nginx-b65884ff7-8qss6   1/1     Running   2 (142m ago)   6h10m   10.244.2.15   k8s-82   <none>           <none>
    nginx-b65884ff7-vhnbt   1/1     Running   2 (142m ago)   6h10m   10.244.1.13   k8s-81   <none>           <none>
    [root@k8s-80 ~]# kubectl get po
    NAME                    READY   STATUS    RESTARTS       AGE
    nginx-b65884ff7-24f4j   1/1     Running   2 (142m ago)   6h10m
    nginx-b65884ff7-8qss6   1/1     Running   2 (142m ago)   6h10m
    nginx-b65884ff7-vhnbt   1/1     Running   2 (142m ago)   6h10m

    # Show the labels of all pods
    [root@k8s-80 ~]# kubectl get po --show-labels
    NAME                    READY   STATUS    RESTARTS       AGE     LABELS
    nginx-b65884ff7-24f4j   1/1     Running   2 (155m ago)   6h23m   app=nginx,pod-template-hash=b65884ff7
    nginx-b65884ff7-8qss6   1/1     Running   2 (155m ago)   6h23m   app=nginx,pod-template-hash=b65884ff7
    nginx-b65884ff7-vhnbt   1/1     Running   2 (155m ago)   6h23m   app=nginx,pod-template-hash=b65884ff7

    # Apply a label
    # Method 1: kubectl edit pod nginx-b65884ff7-24f4j
    # Method 2:
    [root@k8s-80 ~]# kubectl label po nginx-b65884ff7-24f4j uplookingdev=shy
    pod/nginx-b65884ff7-24f4j labeled

    [root@k8s-80 ~]# kubectl get po --show-labels
    NAME                    READY   STATUS    RESTARTS       AGE     LABELS
    nginx-b65884ff7-24f4j   1/1     Running   2 (157m ago)   6h25m   app=nginx,pod-template-hash=b65884ff7,uplookingdev=shy
    nginx-b65884ff7-8qss6   1/1     Running   2 (157m ago)   6h25m   app=nginx,pod-template-hash=b65884ff7
    nginx-b65884ff7-vhnbt   1/1     Running   2 (157m ago)   6h25m   app=nginx,pod-template-hash=b65884ff7

    Removing a label

    [root@k8s-80 ~]# kubectl label po nginx-b65884ff7-24f4j uplookingdev-
    pod/nginx-b65884ff7-24f4j labeled
    [root@k8s-80 ~]# kubectl get po --show-labels
    NAME                    READY   STATUS    RESTARTS       AGE     LABELS
    nginx-b65884ff7-24f4j   1/1     Running   2 (158m ago)   6h26m   app=nginx,pod-template-hash=b65884ff7
    nginx-b65884ff7-8qss6   1/1     Running   2 (158m ago)   6h26m   app=nginx,pod-template-hash=b65884ff7
    nginx-b65884ff7-vhnbt   1/1     Running   2 (158m ago)   6h26m   app=nginx,pod-template-hash=b65884ff7

    Method 2: labeling a svc

    [root@k8s-master ~]# kubectl get svc --show-labels
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   LABELS
    kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   22h   component=apiserver,provider=kubernetes
    [root@k8s-master ~]# kubectl label svc kubernetes today=happy
    service/kubernetes labeled
    [root@k8s-master ~]# kubectl get svc --show-labels
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   LABELS
    kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   22h   component=apiserver,provider=kubernetes,today=happy

    Method 3: labeling a namespace

    [root@k8s-master ~]# kubectl label ns default today=happy
    namespace/default labeled
    [root@k8s-master ~]# kubectl get ns --show-labels
    NAME              STATUS   AGE   LABELS
    default           Active   22h   kubernetes.io/metadata.name=default,today=happy
    ingress-nginx     Active   14m   app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,kubernetes.io/metadata.name=ingress-nginx
    kube-flannel      Active   19h   kubernetes.io/metadata.name=kube-flannel,pod-security.kubernetes.io/enforce=privileged
    kube-node-lease   Active   22h   kubernetes.io/metadata.name=kube-node-lease
    kube-public       Active   22h   kubernetes.io/metadata.name=kube-public
    kube-system       Active   22h   kubernetes.io/metadata.name=kube-system

    Method 4: labeling a node

    [root@k8s-80 ~]# kubectl get no
    NAME     STATUS   ROLES                  AGE     VERSION
    k8s-80   Ready    control-plane,master   6h52m   v1.22.3
    k8s-81   Ready    <none>                 6h52m   v1.22.3
    k8s-82   Ready    <none>                 6h52m   v1.22.3
    [root@k8s-80 ~]# kubectl label no k8s-81 node-role.kubernetes.io/php=true
    node/k8s-81 labeled
    [root@k8s-80 ~]# kubectl label no k8s-81 node-role.kubernetes.io/bus=true
    node/k8s-81 labeled
    [root@k8s-80 ~]# kubectl label no k8s-82 node-role.kubernetes.io/go=true
    node/k8s-82 labeled
    [root@k8s-80 ~]# kubectl label no k8s-82 node-role.kubernetes.io/bus=true
    node/k8s-82 labeled
    [root@k8s-80 ~]# kubectl get no
    NAME     STATUS   ROLES                  AGE     VERSION
    k8s-80   Ready    control-plane,master   6h56m   v1.22.3
    k8s-81   Ready    bus,php                6h55m   v1.22.3
    k8s-82   Ready    bus,go                 6h55m   v1.22.3

    To label a node so that certain pods run only on particular nodes:
        kubectl label no $node node-role.kubernetes.io/$mark=true
    where $node is the node name and $mark is the label you want to apply.

    To schedule specific pods onto nodes carrying that label, add a node selector to the yaml (under the second spec):
        spec:
          nodeSelector:
            node-role.kubernetes.io/$mark: "true"

    kubectl rollout history

    # Look at the previously rolled-out (historical) versions
    [root@k8s-80 k8syaml]# kubectl rollout history deployment
    deployment.apps/nginx
    REVISION  CHANGE-CAUSE
    1         <none>

    [root@k8s-80 k8syaml]# kubectl rollout history deployment --revision=1   # details of revision 1 of the deployment
    deployment.apps/nginx with revision #1
    Pod Template:
      Labels:       app=nginx
                    pod-template-hash=b65884ff7
      Containers:
       nginx:
        Image:      registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
        Port:       80/TCP
        Host Port:  0/TCP
        Environment:        <none>
        Mounts:     <none>
      Volumes:      <none>

    Rollback

    # Undo the previous rollout
    [root@k8s-80 k8syaml]# kubectl rollout undo deployment nginx   # rolls back to the previous revision by default

    # Roll back to a specific revision
    [root@k8s-80 k8syaml]# kubectl rollout undo deployment nginx --to-revision=<revision>

    Update

    Rolling (hot) updates are supported.

    [root@k8s-80 k8syaml]# kubectl set image deployment nginx nginx=nginx:1.20.1   # the registry must have this image
    deployment.apps/nginx image updated
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                    READY   STATUS              RESTARTS       AGE
    nginx-b65884ff7-8qss6   1/1     Running             2 (3h5m ago)   6h53m
    nginx-bc779cc7c-jxh87   0/1     ContainerCreating   0              9s

    [root@k8s-80 k8syaml]# kubectl edit deployment nginx   # inside, the image tag is now 1.20.1
    # change the replica count to 2
    # change the image to nginx:1.20.1

    Scaling out

    As a workload gains more and more users, the existing backend can no longer keep up with the traffic. The traditional answer is to add servers horizontally, and K8S supports horizontal scaling in the same spirit.

    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-58b9b8ff79-2nc2q   1/1     Running   0          4m59s
    nginx-58b9b8ff79-rzx5c   1/1     Running   0          4m14s

    [root@k8s-80 k8syaml]# kubectl scale deployment nginx --replicas=5   # scale out to 5
    deployment.apps/nginx scaled

    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-58b9b8ff79-2nc2q   1/1     Running   0          7m11s
    nginx-58b9b8ff79-f6g6x   1/1     Running   0          2s
    nginx-58b9b8ff79-m7n9b   1/1     Running   0          2s
    nginx-58b9b8ff79-rzx5c   1/1     Running   0          6m26s
    nginx-58b9b8ff79-s6qtx   1/1     Running   0          2s

    # A second way to scale: patch (rarely used)
    [root@k8s-80 k8syaml]# kubectl patch deployment nginx -p '{"spec":{"replicas":6}}'

    Scaling in

    [root@k8s-80 k8syaml]# kubectl scale deployment nginx --replicas=3
    deployment.apps/nginx scaled
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-58b9b8ff79-2nc2q   1/1     Running   0          9m45s
    nginx-58b9b8ff79-m7n9b   1/1     Running   0          2m36s
    nginx-58b9b8ff79-rzx5c   1/1     Running   0          9m

    # You can even go below the replica count written in the yaml file -- the value there is only a desired state, not a hard limit

    Pause a rollout (rarely used)

    [root@k8s-80 k8syaml]# kubectl rollout pause deployment nginx

    Resume the rollout (rarely used)

    [root@k8s-80 k8syaml]# kubectl rollout resume deployment nginx

    Service probes (important)

    For production workloads, keeping the service stable is the top priority; handling faulty instances promptly so they do not affect the business, and recovering quickly, have always been hard parts of dev and ops. Kubernetes provides health checks: a service detected as faulty is automatically taken out of rotation, and restarting the service lets it recover on its own.

    Probe documentation link

    Liveness probes (LivenessProbe)

    Used to judge whether the container is alive, i.e. whether the Pod is in the running state. If the LivenessProbe detects that the container is unhealthy, the kubelet kills the container and restarts it according to the container's restart policy. If a container does not define a LivenessProbe, the kubelet treats the probe as always returning success.

    Liveness probes support three methods: ExecAction, TCPSocketAction and HTTPGetAction.

    Exec (command)

    This one is the most stable and is generally the one to choose.

    [root@k8s-80 k8syaml]# vim nginx.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.18.0
            livenessProbe:
              exec:
                command:
                - cat
                - /opt/a.txt
                # can also be written as:
                # - cat /opt/a.txt
                # or as a small script:
                # - /bin/sh
                # - -c
                # - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
              initialDelaySeconds: 5
              timeoutSeconds: 1
            ports:
            - containerPort: 80

    # This is guaranteed to fail, because /opt/a.txt does not exist
    [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
    deployment.apps/nginx configured
    service/nginx configured
    ingress.networking.k8s.io/nginx unchanged
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS        RESTARTS   AGE
    nginx-58b9b8ff79-rzx5c   1/1     Terminating   0          21m
    nginx-74cd54c6d8-6s9xq   1/1     Running       0          6s
    nginx-74cd54c6d8-8kk86   1/1     Running       0          5s
    nginx-74cd54c6d8-g6fx2   1/1     Running       0          3s

    [root@k8s-80 k8syaml]# kubectl describe po nginx-58b9b8ff79-rzx5c   # the events show the container being restarted

    [root@k8s-80 k8syaml]# kubectl get po   # the RESTARTS count keeps climbing
    NAME                     READY   STATUS    RESTARTS      AGE
    nginx-74cd54c6d8-6s9xq   1/1     Running   2 (60s ago)   3m
    nginx-74cd54c6d8-8kk86   1/1     Running   2 (58s ago)   2m59s
    nginx-74cd54c6d8-g6fx2   1/1     Running   2 (57s ago)   2m57s

    Using a condition that succeeds

    # In nginx.yaml, change /opt/a.txt
    # to /usr/share/nginx/html/index.html

    [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
    deployment.apps/nginx configured
    service/nginx unchanged
    ingress.networking.k8s.io/nginx unchanged

    [root@k8s-80 k8syaml]# kubectl get po
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-cf85cd887-cwv9t   1/1     Running   0          15s
    nginx-cf85cd887-m4krx   1/1     Running   0          13s
    nginx-cf85cd887-xgmhv   1/1     Running   0          12s

    [root@k8s-80 k8syaml]# kubectl describe po nginx-cf85cd887-cwv9t   # look at the details
    [root@k8s-80 k8syaml]# kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   7h54m
    nginx        ClusterIP   10.105.11.148   <none>        80/TCP    7h28m

    [root@k8s-80 k8syaml]# kubectl get ing   # look at the ingress
    NAME    CLASS   HOSTS         ADDRESS          PORTS   AGE
    nginx   nginx   www.lin.com   192.168.188.81   80      7h22m

    # 192.168.188.81 is the IP the Ingress controller allocated to implement this Ingress. The RULE column means
    # that all traffic sent to that IP is forwarded to the Kubernetes service listed under BACKEND.

    [root@k8s-80 k8syaml]# kubectl get svc -n ingress-nginx
    NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
    ingress-nginx-controller             NodePort    10.101.104.205   <none>        80:30799/TCP,443:31656/TCP   7h32m
    ingress-nginx-controller-admission   ClusterIP   10.107.116.128   <none>        443/TCP                      7h32m

    # The page now opens normally in the browser

    A second kind of condition

    # In nginx.yaml, change
    #   - cat
    #   - /usr/local/nginx/html/index.html
    # to
    #   - /bin/sh
    #   - -c
    #   - nginx -t

    [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml

    [root@k8s-80 k8syaml]# kubectl get po
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-fcf47cfcc-6txxf   1/1     Running   0          41s
    nginx-fcf47cfcc-fd5qc   1/1     Running   0          43s
    nginx-fcf47cfcc-x2qzg   1/1     Running   0          46s

    [root@k8s-80 k8syaml]# kubectl describe po nginx-fcf47cfcc-6txxf   # look at the details

    # The page still renders in the browser

    # Without a readiness probe on top of the health check, a pod can be Running yet unable to serve,
    # so in practice both kinds of probe are usually configured
    TCPSocket
    [root@k8s-80 k8syaml]# cp nginx.yaml f-tcpsocket.yaml
    [root@k8s-80 k8syaml]# vim f-tcpsocket.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.18.0
            livenessProbe:
              tcpSocket:
                port: 80
              initialDelaySeconds: 5
              timeoutSeconds: 1
            ports:
            - containerPort: 80
    [root@k8s-80 k8syaml]# kubectl apply -f f-tcpsocket.yaml
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-5f48dd7bb9-6crdg   1/1     Running   0          48s
    nginx-5f48dd7bb9-t7q9k   1/1     Running   0          47s
    nginx-5f48dd7bb9-tcn2d   1/1     Running   0          49s
    [root@k8s-80 k8syaml]# kubectl describe po nginx-5f48dd7bb9-6crdg   # the details show nothing wrong
    HTTPGet
    # Edit the livenessProbe section of f-tcpsocket.yaml
    # and change it to:
            livenessProbe:
              httpGet:
                path: /
                port: 80
                host: 127.0.0.1     # equivalent to http://127.0.0.1:80
                scheme: HTTP

    [root@k8s-80 k8syaml]# kubectl apply -f f-tcpsocket.yaml
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS    RESTARTS   AGE
    nginx-6795fb5766-2fj42   1/1     Running   0          38s
    nginx-6795fb5766-dj75j   1/1     Running   0          38s
    nginx-6795fb5766-wblxq   1/1     Running   0          36s

    Readiness probes

    Used to judge whether the container can serve traffic normally, i.e. whether its Ready condition is True and it can accept requests. If the ReadinessProbe fails, Ready is set to False and the controller removes this Pod's Endpoint from the corresponding service's Endpoint list, so no more requests are scheduled to this Pod until the probe succeeds again. (The pod is taken out of rotation and no longer receives traffic, but is not restarted.)

    The probe types are the same as for liveness probes, but the parameters differ; here, probing the port is the better choice.

    HTTPGet

    Probes whether the POD can serve traffic by requesting a URL.

    # Write one that fails on purpose, to demonstrate reclaiming the pods by hand
    [root@k8s-80 k8syaml]# vim nginx.yaml
    # Add a readinessProbe below the livenessProbe block, at the same level as livenessProbe
            livenessProbe:
              exec:
                command:
                - /bin/sh
                - -c
                - nginx -t
              initialDelaySeconds: 5
              timeoutSeconds: 1
            readinessProbe:
              httpGet:
                port: 80
                path: /demo.html

    # Delete the abnormal pods that cannot start
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS             RESTARTS        AGE
    nginx-6795fb5766-2fj42   0/1     CrashLoopBackOff   12 (117s ago)   98m
    nginx-6795fb5766-dj75j   0/1     CrashLoopBackOff   12 (113s ago)   98m
    nginx-6795fb5766-wblxq   0/1     CrashLoopBackOff   12 (113s ago)   98m
    nginx-75b7449cdd-cjxq6   0/1     Running            0               44s

    # kubectl delete po NAME -- reclaim the resources by hand
    [root@k8s-80 k8syaml]# kubectl delete po nginx-6795fb5766-2fj42
    pod "nginx-6795fb5766-2fj42" deleted
    [root@k8s-80 k8syaml]# kubectl delete po nginx-6795fb5766-dj75j
    pod "nginx-6795fb5766-dj75j" deleted
    [root@k8s-80 k8syaml]# kubectl delete po nginx-6795fb5766-wblxq
    pod "nginx-6795fb5766-wblxq" deleted
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                     READY   STATUS    RESTARTS      AGE
    nginx-6795fb5766-grb2w   1/1     Running   0             3s
    nginx-6795fb5766-r9ktt   1/1     Running   0             12s
    nginx-6795fb5766-z2hxg   1/1     Running   0             19s
    nginx-75b7449cdd-cjxq6   0/1     Running   1 (32s ago)   92s

    # Change
    #   readinessProbe:
    #     httpGet:
    #       port: 80
    #       path: /demo.html
    # to
    #   readinessProbe:
    #     httpGet:
    #       port: 80
    #       path: /index.html

    [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
    [root@k8s-80 k8syaml]# kubectl get po
    NAME                    READY   STATUS    RESTARTS     AGE
    nginx-d997687df-2mrvd   1/1     Running   1 (8s ago)   68s
    nginx-d997687df-gzw4p   1/1     Running   1 (9s ago)   69s
    nginx-d997687df-vk4vn   1/1     Running   1 (2s ago)   62s
    TCPSocket

通過嘗試與某個端口建立 TCP 連接的方式,探測服務是否可以正常對外提供服務。

    # nginx.yaml# 修改 readinessProbe:httpGet:port: 80path: /index.html# 為 readinessProbe:tcpSocket:port: 80[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml[root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE nginx-68b7599f6-dhkvs 1/1 Running 0 32s nginx-68b7599f6-zdvtd 1/1 Running 0 34s nginx-68b7599f6-zlpjq 1/1 Running 0 33s

更具體的用法或其他探針類型,可參考《Kubernetes 進階教程》PDF

    K8S 監控組件 metrics-server(重點)

    在未安裝metrics組件的時候,kubectl top node是無法使用的

根據 k8s 版本選擇對應的 metrics-server 版本,官方提供 yaml 可以下載;這裡先下載並上傳到服務器,查看需要哪些鏡像、哪些無法直接拉取,無法拉取的借助阿里雲鏡像倉庫代為拉取。因為這是測試環境,拉取下來後添加了 "--kubelet-insecure-tls" 這個參數,這樣就不會去驗證 Kubelet 提供的服務證書的 CA;僅用於測試環境,解釋如下圖

    添加位置如下

    # 查找需要什么鏡像 [root@k8s-80 k8syaml]# cat components.yaml | grep "image"image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1imagePullPolicy: IfNotPresent# 在阿里云構建拉取替換components.yaml中鏡像拉取的地址[root@k8s-master ~]# docker pull registry.cn-guangzhou.aliyuncs.com/uplooking-class2/metrics:v0.6.1 [root@k8s-master ~]# docker tag registry.cn-guangzhou.aliyuncs.com/uplooking-class2/metrics:v0.6.1 k8s.gcr.io/metrics-server/metrics-server:v0.6.1注:如果拉取不了鏡像,可以指定鏡像路徑為阿里云倉庫 [root@k8s-master ~]# vim components.yamlcontainers:- args:- --cert-dir=/tmp- --secure-port=4443- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname- --kubelet-use-node-status-port- --metric-resolution=15s- --kubelet-insecure-tlsimage: registry.cn-guangzhou.aliyuncs.com/uplooking-class2/metrics:v0.6.1#image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1imagePullPolicy: IfNotPresent[root@k8s-80 k8syaml]# kubectl apply -f components.yaml[root@k8s-80 k8syaml]# kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcd69978-d8cv5 1/1 Running 4 (128m ago) 10h coredns-78fcd69978-qp7f6 1/1 Running 4 (128m ago) 10h etcd-k8s-80 1/1 Running 4 (127m ago) 10h kube-apiserver-k8s-80 1/1 Running 4 (127m ago) 10h kube-controller-manager-k8s-80 1/1 Running 4 (127m ago) 10h kube-flannel-ds-88kmk 1/1 Running 4 (128m ago) 10h kube-flannel-ds-wfvst 1/1 Running 4 (128m ago) 10h kube-flannel-ds-wq2vz 1/1 Running 4 (127m ago) 10h kube-proxy-6t72l 1/1 Running 4 (128m ago) 10h kube-proxy-84vzc 1/1 Running 4 (127m ago) 10h kube-proxy-dllpx 1/1 Running 4 (128m ago) 10h kube-scheduler-k8s-80 1/1 Running 4 (127m ago) 10h metrics-server-6d54b97f-qwcqg 1/1 Running 2 (128m ago) 8h [root@k8s-80 k8syaml]# kubectl top node NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% k8s-80 155m 7% 924Mi 49% k8s-81 34m 3% 536Mi 28% k8s-82 46m 4% 515Mi 27%[root@k8s-80 k8syaml]# kubectl top pod NAME CPU(cores) MEMORY(bytes) nginx-68b7599f6-fs6rp 4m 147Mi nginx-68b7599f6-wzffm 7m 147Mi

    創建HPA

    官網傳送門

在生產環境中,總會有一些意想不到的事情發生,比如公司網站流量突然升高,此時之前創建的 Pod 已不足以撐住所有的訪問,而運維人員也不可能 24 小時守着業務服務,這時就可以通過配置 HPA,實現負載過高的情況下自動擴容 Pod 副本數以分攤高並發的流量;當流量恢復正常後,HPA 會自動縮減 Pod 的數量。HPA 是根據 CPU 使用率、內存使用率自動擴縮 Pod 數量的,所以要使用 HPA 就必須定義 Requests 參數

    [root@k8s-80 k8syaml]# vi nginx.yaml # 添加以下內容--- apiVersion: autoscaling/v2beta1 apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata:name: nginx spec:scaleTargetRef:apiVersion: apps/v1name: nginxkind: DeploymentminReplicas: 2maxReplicas: 10metrics:- type: Resourceresource:name: cputargetAverageUtilization: 10# 添加內容二 #與livenessProbe同級 resources:limits:cpu: 200mmemory: 500Mirequests:cpu: 200mmemory: 500Mi# 生產中至少要2個pod保證高可用 # 但是生產上建議設置一致防止出現問題Deployment的副本集是個期望值,hpa的副本集是范圍值;后面配置我都改成3個保持一致 [root@k8s-80 k8syaml]# kubectl explain hpa # 查看apiVersion[root@k8s-80 k8syaml]# kubectl delete -f nginx.yaml [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml[root@k8s-80 k8syaml]# kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE nginx Deployment/nginx 0%/10% 2 10 3 3m44s[root@k8s-master ~]# kubectl get pod NAME READY STATUS RESTARTS AGE nginx-cb994dfb7-9lxlw 0/1 Pending 0 4s nginx-cb994dfb7-t6zjv 0/1 Pending 0 4s nginx-cb994dfb7-tdhpb 1/1 Running 0 4s[root@k8s-master ~]# kubectl describe pod nginx-cb994dfb7-9lxlw Events:Type Reason Age From Message---- ------ ---- ---- -------Warning FailedScheduling 38s default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient memory.注:此報錯主要是由于requests數值太多了,導致內存不足,可以將某個些對象的requests的值換成一個更小的數 [root@k8s-master ~]# vim nginx.yaml resources:limits:cpu: 200mmemory: 200Mirequests:cpu: 200mmemory: 200Mi[root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE nginx-77f85b76b9-f66ll 1/1 Running 0 3m55s nginx-77f85b76b9-hknv9 1/1 Running 0 3m55s nginx-77f85b76b9-n54pp 1/1 Running 0 3m55s# hpa與副本集的數量,誰多誰說了算

    修改后的nginx.yaml詳細

    nginx.yaml apiVersion: v1 kind: Service metadata:name: nginx spec:#type: NodePortselector:app: nginxports:- name: httpprotocol: TCPport: 80targetPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata:name: nginxlabels:app: nginx spec:replicas: 3selector:matchLabels:app: nginxtemplate:metadata:labels:app: nginxspec:containers:- name: nginximage: nginx:1.18.0livenessProbe:tcpSocket:port: 80initialDelaySeconds: 5timeoutSeconds: 1readinessProbe:httpGet:port: 80path: /index.htmlresources:limits:cpu: 200mmemory: 200Mirequests:cpu: 200mmemory: 200Miports:- containerPort: 80 --- apiVersion: autoscaling/v2beta1 #apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata:name: nginx spec:scaleTargetRef:apiVersion: apps/v1name: nginxkind: DeploymentminReplicas: 2maxReplicas: 10metrics:- type: Resourceresource:name: cputargetAverageUtilization: 10

    內網壓測

    [root@k8s-80 k8syaml]# kubectl delete -f cname.yaml [root@k8s-80 k8syaml]# kubectl delete -f f-tcpsocket.yaml [root@k8s-80 k8syaml]# kubectl get po[root@k8s-80 k8syaml]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19h nginx ClusterIP 10.98.161.97 <none> 80/TCP 4m15s[root@k8s-80 k8syaml]# curl 10.98.161.97 # 會有頁面源碼返回 [root@k8s-80 k8syaml]# yum install -y httpd-tools [root@k8s-80 k8syaml]# ab -c 1000 -n 200000 http://10.98.161.97/# 另開一個終端動態觀看 [root@k8s-80 k8syaml]# watch "kubectl get po"或者[root@k8s-80 k8syaml]# kubectl get pods -w

    導出配置成yaml

    [root@k8s-80 k8syaml]# kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE coredns-78fcd69978-d8cv5 1/1 Running 4 (132m ago) 10h coredns-78fcd69978-qp7f6 1/1 Running 4 (132m ago) 10h etcd-k8s-80 1/1 Running 4 (132m ago) 10h kube-apiserver-k8s-80 1/1 Running 4 (132m ago) 10h kube-controller-manager-k8s-80 1/1 Running 4 (132m ago) 10h kube-flannel-ds-88kmk 1/1 Running 4 (132m ago) 10h kube-flannel-ds-wfvst 1/1 Running 4 (132m ago) 10h kube-flannel-ds-wq2vz 1/1 Running 4 (132m ago) 10h kube-proxy-6t72l 1/1 Running 4 (132m ago) 10h kube-proxy-84vzc 1/1 Running 4 (132m ago) 10h kube-proxy-dllpx 1/1 Running 4 (132m ago) 10h kube-scheduler-k8s-80 1/1 Running 4 (132m ago) 10h metrics-server-6d54b97f-qwcqg 1/1 Running 2 (132m ago) 8h [root@k8s-80 k8syaml]# kubectl get deployment -o yaml >> test.yaml [root@k8s-80 k8syaml]# kubectl get svc -o yaml >> test.yaml [root@k8s-80 k8syaml]# kubectl get ing -o yaml >> test.yaml

    創建StatefulSets

    StatefulSet是為了解決有狀態服務的問題(對應Deployments和ReplicaSets是為無狀態服務而設計),其應用場景包括

    • 穩定的持久化存儲,即Pod重新調度后還是能訪問到相同的持久化數據,基于PVC來實現
    • 穩定的網絡標志,即Pod重新調度后其PodName和HostName不變,基于Headless Service(即沒有Cluster IP的Service)來實現
    • 有序部署,有序擴展,即Pod是有順序的,在部署或者擴展的時候要依據定義的順序依次進行(即從0到N-1,在下一個Pod運行之前所有之前的Pod必須都是Running和Ready狀態),基於init containers來實現
    • 有序收縮,有序刪除(即從N-1到0)

    從上面的應用場景可以發現,StatefulSet由以下幾個部分組成:

    • 用于定義網絡標志(DNS domain)的Headless Service
    • 用于創建PersistentVolumes的volumeClaimTemplates
    • 定義具體應用的StatefulSet
    [root@k8s-80 k8syaml]# cp nginx.yaml mysql.yaml [root@k8s-80 k8syaml]# vim mysql.yaml apiVersion: v1 kind: Service metadata:name: mysql spec:type: NodePortselector:app: mysqlports:- name: httpprotocol: TCPport: 3306targetPort: 3306nodePort: 30005 --- apiVersion: apps/v1 kind: StatefulSet metadata:name: mysqllabels:app: mysql spec:serviceName: "mysql"replicas: 1selector:matchLabels:app: mysqltemplate:metadata:labels:app: mysqlspec:containers:- name: mysqlimage: mysql:5.7env:- name: MYSQL_ROOT_PASSWORDvalue: "123456" [root@k8s-80 k8syaml]# kubectl apply -f mysql.yaml[root@k8s-80 k8syaml]# kubectl get po[root@k8s-80 k8syaml]# kubectl describe po mysql-0[root@k8s-80 k8syaml]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 43h mysql ClusterIP 10.103.12.104 <none> 3306/TCP 60s
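上面的 mysql.yaml 只演示了 StatefulSet 本身,並沒有體現前面提到的 Headless Service(clusterIP: None)和 volumeClaimTemplates;下面是一個最小示意(其中 storageClassName: nfs、存儲大小等均為假設值,需按實際環境調整):

apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None            # Headless Service,為每個 Pod 提供穩定的 DNS 記錄
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql-headless"
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:        # 每個副本自動創建一個 PVC:data-mysql-0、data-mysql-1 ...
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs"  # 假設集群裡已有同名 StorageClass 或可綁定的 PV
      resources:
        requests:
          storage: 5Gi

這樣 Pod 重建後仍能通過 mysql-0.mysql-headless 這樣的域名被訪問,並且數據盤跟隨 Pod 序號綁定。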

    項目打包進k8s

    模擬k8s的cicd;使用wordpress

    1.編寫Dockerfile

# 上傳 wordpress-5.9.2-zh_CN.tar.gz 到 /k8syaml 下
[root@k8s-80 k8syaml]# tar -xzf wordpress-5.9.2-zh_CN.tar.gz

[root@k8s-80 k8syaml]# vim Dockerfile
FROM registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3

COPY wordpress /usr/local/nginx/html

[root@k8s-80 k8syaml]# docker build -t wordpress:v1.0 .

    2.推送上倉庫(這里選擇阿里云)

# 登陸
docker login --username=lzzdd123 registry.cn-guangzhou.aliyuncs.com

# 打標籤
# docker tag [ImageId] registry.cn-guangzhou.aliyuncs.com/uplooking-class2/lnmp-wordpress:[鏡像版本號]
[root@k8s-80 k8syaml]# docker images | grep wordpress
[root@k8s-80 k8syaml]# docker tag 9e0b6b120f79 registry.cn-guangzhou.aliyuncs.com/uplooking-class2/lnmp-wordpress:v1.0

# 推送
[root@k8s-80 k8syaml]# docker push registry.cn-guangzhou.aliyuncs.com/uplooking-class2/lnmp-wordpress:v1.0

    3.修改nginx.yaml

把 nginx.yaml 裡的鏡像更換成剛推送到阿里雲的 lnmp-wordpress:v1.0 鏡像,並更換服務名等;需要注意 ingressClassName 必須設置成 nginx

    apiVersion: apps/v1 kind: Deployment metadata:name: weblabels:app: web spec:replicas: 1selector:matchLabels:app: webtemplate:metadata:labels:app: webspec:containers:- name: webimage: registry.cn-guangzhou.aliyuncs.com/uplooking-class2/lnmp-wordpress:v1.0 livenessProbe:exec:command:- /bin/sh- -c- nginx -tinitialDelaySeconds: 5timeoutSeconds: 1readinessProbe:tcpSocket:port: 80ports:- containerPort: 80resources:limits:cpu: 200mmemory: 200Mirequests:cpu: 200mmemory: 200Mi --- apiVersion: autoscaling/v2beta1 apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata:name: web spec:scaleTargetRef:apiVersion: apps/v1name: webkind: DeploymentminReplicas: 1maxReplicas: 10metrics:- type: Resourceresource:name: cputargetAverageUtilization: 100 --- apiVersion: v1 kind: Service metadata:name: web spec:type: NodePortselector:app: webports:- name: httpprotocol: TCPport: 80targetPort: 80nodePort: 30005 #--- #apiVersion: networking.k8s.io/v1 #kind: Ingress #metadata: # name: web # annotations: # nginx.ingress.kubernetes.io/rewrite-target: / #spec: # ingressClassName: nginx # rules: # - host: www.lin.com # http: # paths: # - path: / # pathType: Prefix # backend: # service: # name: web # port: # number: 80

    4.檢查ing

    檢查ing防止與新修改的nginx.yaml文件配置沖突,也就是kind: Ingress metadata下的
    name: web不能與已有的相同

    [root@k8s-80 k8syaml]# kubectl get ing

    5.創建并運行

    [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml[root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE mysql-0 1/1 Running 2 (3m47s ago) 3h34m web-69f7d585c9-5tjz8 1/1 Running 1 (3m39s ago) 5m50s

    6.進入容器修改

    [root@k8s-80 k8syaml]# kubectl exec -it web-69f7d585c9-5tjz8 -- bash [root@web-69f7d585c9-5tjz8 /]# cd /usr/local/nginx/conf/ [root@web-69f7d585c9-5tjz8 conf]# vi nginx.conf # 改:root /usr/local/nginx/html; # 為:root /data[root@web-69f7d585c9-5tjz8 conf]# nginx -s reload [root@web-69f7d585c9-5tjz8 conf]# vi /usr/local/php/etc/php.ini # 改:;session.save_handler = files;session.save_path = "/tmp"# 為:session.save_handler = files session.save_path = "/tmp"[root@web-69f7d585c9-5tjz8 conf]# /etc/init.d/php-fpm restart

    7.瀏覽器操作

    [root@k8s-80 k8syaml]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d1h mysql ClusterIP 10.103.12.104 <none> 3306/TCP 5h40m web ClusterIP 10.99.30.175 <none> 80/TCP 131m

    節點選擇器

    官方傳送門

    [root@k8s-80 k8syaml]# kubectl get no NAME STATUS ROLES AGE VERSION k8s-80 Ready control-plane,master 2d2h v1.22.3 k8s-81 Ready bus,php 2d2h v1.22.3 k8s-82 Ready bus,go 2d2h v1.22.3[root@k8s-80 k8syaml]# kubectl get po -o wide # 可以看到服務都落在k8s-81 NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mysql-0 1/1 Running 2 (3h14m ago) 6h45m 10.244.1.155 k8s-81 <none> <none> web-69f7d585c9-5tjz8 1/1 Running 1 (3h14m ago) 3h16m 10.244.1.157 k8s-81 <none> <none> [root@k8s-80 k8syaml]#
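上面 ROLES 列裡的 bus、php、go 並不是集群自帶的角色,而是提前給節點打的標籤;如果要復現這種效果,大致可以按下面的方式打標(具體鍵值是按後文 nodeSelector 用到的 node-role.kubernetes.io/go: "true" 等推測的示例,僅供參考):

# 給節點打標籤(node-role.kubernetes.io/<角色> 會顯示在 ROLES 列)
kubectl label nodes k8s-81 node-role.kubernetes.io/bus=true node-role.kubernetes.io/php=true
kubectl label nodes k8s-82 node-role.kubernetes.io/bus=true node-role.kubernetes.io/go=true

# 查看節點標籤,確認 nodeSelector 要匹配的鍵值
kubectl get nodes --show-labels
kubectl describe no k8s-81 | grep -A 10 Labels

# 如需去掉標籤,在鍵名後加 "-"
# kubectl label nodes k8s-81 node-role.kubernetes.io/php-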

    定向調度

    # 修改nginx.yaml,在containers:容器屬性以上寫該段,與containers同級 # 其中node-role.kubernetes.io/php: "true"這個可以通過kubectl describe no k8s-81來看到 spec:nodeSelector:node-role.kubernetes.io/go: "true"containers:# 重新部署一下 [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml# 驗證 [root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE mysql-0 1/1 Running 2 (3h25m ago) 6h56m web-7469846f96-77j2g 1/1 Running 0 76s [root@k8s-80 k8syaml]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mysql-0 1/1 Running 2 (3h25m ago) 6h56m 10.244.1.155 k8s-81 <none> <none> web-7469846f96-77j2g 1/1 Running 0 78s 10.244.2.108 k8s-82 <none>

    多節點分散調度

    這里使用的是硬親和(硬約束)

    # 修改nginx.yaml,在containers:容器屬性以上寫該段;并且修改標簽 # 副本集與資源限制最小副本集一致改為2,后面說更改副本集都是兩處都改spec:affinity:podAntiAffinity:requiredDuringSchedulingIgnoredDuringExecution:- labelSelector:matchExpressions:- key : appoperator: Invalues:- webtopologyKey: kubernetes.io/hostnamenodeSelector:node-role.kubernetes.io/bus: "true" [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml [root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE mysql-0 1/1 Running 2 (3h46m ago) 7h16m web-7469846f96-77j2g 1/1 Running 0 21m web-7469846f96-vqxkq 1/1 Running 0 4m2s web-7f94dd4894-f7glb 0/1 Pending 0 4m2s[root@k8s-80 k8syaml]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mysql-0 1/1 Running 2 (3h46m ago) 7h17m 10.244.1.155 k8s-81 <none> <none> web-7469846f96-77j2g 1/1 Running 0 22m 10.244.2.108 k8s-82 <none> <none> web-7469846f96-vqxkq 1/1 Running 0 4m49s 10.244.2.109 k8s-82 <none> <none> web-7f94dd4894-f7glb 0/1 Pending 0 4m49s <none> <none> <none> <none># 會發現有一個處于Pending狀態,而且無法調度;因為兩個節點上都有資源,已經存在的情況下是調度不上去的,會自動判斷是否存在# 先把副本集改為1部署,然后會有一個變成Terminating狀態無法刪除也無法調度,需要刪除一個讓機制實現;副本集再改2# 刪除后 [root@k8s-80 k8syaml]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mysql-0 1/1 Running 2 (3h56m ago) 7h27m 10.244.1.155 k8s-81 <none> <none> web-7f94dd4894-jmt5p 1/1 Running 0 82s 10.244.2.110 k8s-82 <none> <none>
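硬約束在可用節點不夠時會像上面一樣出現 Pending;如果希望"儘量分散、實在不行也允許同節點",可以改用軟約束 preferredDuringSchedulingIgnoredDuringExecution。下面是對應的片段示意(字段位置與上面的 affinity 一致,標籤沿用文中的 app: web,僅供參考):

      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100                      # 權重越大,調度器越傾向於滿足該條件
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - web
              topologyKey: kubernetes.io/hostname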

遇到自動添加的污點(master 節點默認的 NoSchedule 污點)

    官方傳送門

    #使用這條命令查看或者kubectl get no -o yaml | grep taint -A 5[root@k8s-80 k8syaml]# kubectl describe node |grep Taint Taints: node-role.kubernetes.io/master:NoSchedule Taints: <none> Taints: <none> [root@k8s-80 k8syaml]# kubectl describe node k8s-81 |grep Taint Taints: <none> [root@k8s-80 k8syaml]# kubectl describe node k8s-80 |grep Taint # 得出80被打上污點 Taints: node-role.kubernetes.io/master:NoSchedule

    解決方法

# 去除污點 NoSchedule,最後一個 "-" 代表刪除
[root@k8s-80 k8syaml]# kubectl taint nodes k8s-80 node-role.kubernetes.io/master:NoSchedule-
node/k8s-80 untainted

# 驗證
[root@k8s-80 k8syaml]# kubectl describe node |grep Taint
Taints:             <none>
Taints:             <none>
Taints:             <none>
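去掉 master 的污點意味着所有業務 Pod 都可能被調度到 master,生產上更穩妥的做法通常是保留污點、只給確實需要上 master 的 Pod 加容忍(toleration)。下面是一個片段示意(寫在 Pod 模板的 spec 下,與 containers 同級,僅供參考):

      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"        # 容忍該 key 的污點,不校驗 value
        effect: "NoSchedule"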

    硬親和調度效果

[root@k8s-80 k8syaml]# kubectl get po -o wide
NAME                  READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
mysql-0               1/1     Running   2 (4h28m ago)   7h59m   10.244.1.155   k8s-81   <none>           <none>
web-c94cbcc58-6q9d9   1/1     Running   0               4s      10.244.2.112   k8s-82   <none>           <none>
web-c94cbcc58-nc449   1/1     Running   0               4s      10.244.1.159   k8s-81   <none>           <none>

    數據持久化

Pod 是由容器組成的,而容器宕機或停止之後,數據就隨之丟了,那麼這也就意味着我們在做 Kubernetes 集群的時候就不得不考慮存儲的問題,而存儲卷就是為了 Pod 保存數據而生的。存儲卷的類型有很多,我們常用的一般有四種:emptyDir、hostPath、NFS 以及雲存儲(ceph、glusterfs…)等

    emptyDir

    emptyDir 類型的 volume 在 pod 分配到 node 上時被創建,kubernetes 會在 node 上自動分配 一個目錄,因 此無需指定宿主機 node 上對應的目錄文件。這個目錄的初始內容為空,當 Pod 從 node 上移除時,emptyDir 中 的數據會被永久刪除。emptyDir Volume 主要用于某些應用程序無需永久保存的臨時目錄。

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      sizeLimit: 500Mi

    hostPath (相當于docker的掛載)

hostPath 類型則是映射 node 文件系統中的文件或者目錄到 pod 裡。在使用 hostPath 類型的存儲卷時,也可以設置 type 字段,支持的類型有 DirectoryOrCreate、Directory、FileOrCreate、File、Socket、CharDevice 和 BlockDevice 等。

    [root@k8s-80 k8syaml]# kubectl delete -f mysql.yaml# 修改mysql.yaml文件,在containers下添加(volumeMounts與env同級,volumes與containers同級)volumeMounts:- name: datamountPath: /var/lib/mysqlvolumes:- name: datahostPath:path: /data/mysqldatatype: DirectoryOrCreate [root@k8s-80 k8syaml]# kubectl apply -f mysql.yaml# 查看落在哪個節點上 [root@k8s-80 k8syaml]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES mysql-0 1/1 Running 0 28s 10.244.1.160 k8s-81 <none> <none> web-c94cbcc58-6q9d9 1/1 Running 0 29m 10.244.2.112 k8s-82 <none> <none> web-c94cbcc58-nc449 1/1 Running 0 29m 10.244.1.159 k8s-81 <none> <none># 在k8s-81上驗證是否創建文件 [root@k8s-81 ~]# ls /data/ mysqldata

    Nfs掛載

nfs 使得我們可以掛載已經存在的共享目錄到我們的 Pod 中。和 emptyDir 不同的是,當 Pod 被刪除時,emptyDir 也會被刪除,但是 nfs 不會被刪除,僅僅是解除掛載狀態而已,這就意味着 NFS 能夠允許我們提前對數據進行處理,而且這些數據可以在 Pod 之間相互傳遞,並且 nfs 可以同時被多個 pod 掛載並進行讀寫。

    主機名    IP                備注
    nfs       192.168.188.17    nfs
  • 在所有節點上安裝 nfs

yum install nfs-utils.x86_64 rpcbind -y

# NFS 服務端對應的服務名是 nfs-server(nfs-utils 只是軟件包名)
systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
  • 配置 nfs(exports 中授權的網段需與 k8s 節點所在網段一致,按實際環境填寫)

    [root@nfs ~]# mkdir -p /data/v{1..5}[root@nfs ~]# cat > /etc/exports <<EOF /data/v1 192.168.245.*(rw,no_root_squash) /data/v2 192.168.245.*(rw,no_root_squash) /data/v3 192.168.245.*(rw,no_root_squash) /data/v4 192.168.245.*(rw,no_root_squash) /data/v5 192.168.245.*(rw,no_root_squash) EOF[root@nfs ~]# exportfs -arv exporting 192.168.245.*:/data/v5 exporting 192.168.245.*:/data/v4 exporting 192.168.245.*:/data/v3 exporting 192.168.245.*:/data/v2 exporting 192.168.245.*:/data/v1[root@nfs ~]# showmount -e Export list for nfs: /data/v5 192.168.245.* /data/v4 192.168.245.* /data/v3 192.168.245.* /data/v2 192.168.245.* /data/v1 192.168.245.*
  • 創建 POD 使用 Nfs

    # 修改nginx.yaml文件,副本集改為1,增加volumeMounts、volumes - containerPort: 80volumeMounts:- mountPath: /usr/local/nginx/html/name: nfsvolumes:- name: nfsnfs:path: /data/v1server: 192.168.188.17 [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml[root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE mysql-0 1/1 Running 0 52m web-5d689d77d6-zwfnq 1/1 Running 0 93s# 可以看到掛載信息 [root@k8s-80 k8syaml]# kubectl describe po web-5d689d77d6-zwfnq# 在nfs服務器上操作 [root@nfs ~]# echo "this is 17" > /data/v1/index.html# 瀏覽器打開http://192.168.245.210:30005/
  • PV 和 PVC

    官方傳送門

PersistentVolume(PV)是集群中已由管理員配置的一段網絡存儲。集群中的資源就像一個節點是一個集群資源。PV 是諸如卷之類的卷插件,但是具有獨立於使用 PV 的任何單個 pod 的生命周期。該 API 對象捕獲存儲的實現細節,即 NFS、iSCSI 或雲提供商特定的存儲系統(ceph)。PersistentVolumeClaim(PVC)是用戶存儲的請求。PVC 的使用邏輯:在 pod 中定義一個存儲卷(該存儲卷類型為 PVC),定義的時候直接指定大小;pvc 必須與對應的 pv 建立關係,pvc 會根據定義去 pv 申請,而 pv 是由存儲空間創建出來的。pv 和 pvc 是 kubernetes 抽象出來的一種存儲資源。

    PV 的訪問模式(accessModes)

    模式                    解釋
    ReadWriteOnce(RWO)      可讀可寫,但只支持被單個節點掛載
    ReadOnlyMany(ROX)       只讀,可以被多個節點掛載
    ReadWriteMany(RWX)      多路可讀可寫,這種存儲可以以讀寫的方式被多個節點共享

    不是每一種存儲都支持這三種方式,像共享方式,目前支持的還比較少,比較常用的是 NFS。在 PVC 綁定 PV 時通常根據兩個條件來綁定,一個是存儲的大小,另一個就是訪問模式

    PV 的回收策略(persistentVolumeReclaimPolicy)

    策略       解釋
    Retain     不清理,保留 Volume(需要手動清理)
    Recycle    刪除數據,即 rm -rf /thevolume/*(只有 NFS 和 HostPath 支持)
    Delete     刪除存儲資源,比如刪除 AWS EBS 卷(只有 AWS EBS、GCE PD、Azure Disk 和 Cinder 支持)

    PV 的狀態

    狀態        解釋
    Available   可用
    Bound       已經分配給 PVC
    Released    PVC 解綁但還未執行回收策略
    Failed      發生錯誤

    創建PV

    [root@k8s-80 k8syaml]# vim pv.yaml apiVersion: v1 kind: PersistentVolume metadata:name: pv001labels:app: pv001 spec:nfs:path: /data/v2server: 192.168.188.17accessModes:- "ReadWriteMany"- "ReadWriteOnce"capacity:storage: 5Gi --- apiVersion: v1 kind: PersistentVolume metadata:name: pv002labels:app: pv002 spec:nfs:path: /data/v3server: 192.168.188.17accessModes:- "ReadWriteMany"- "ReadWriteOnce"capacity:storage: 10Gi --- apiVersion: v1 kind: PersistentVolume metadata:name: pv003labels:app: pv003 spec:nfs:path: /data/v4server: 192.168.188.17accessModes:- "ReadWriteMany"- "ReadWriteOnce"capacity:storage: 15Gi --- apiVersion: v1 kind: PersistentVolume metadata:name: pv004labels:app: pv004 spec:nfs:path: /data/v5server: 192.168.188.17accessModes:- "ReadWriteMany"- "ReadWriteOnce"capacity:storage: 20Gi [root@k8s-80 k8syaml]# kubectl apply -f pv.yaml persistentvolume/pv001 created persistentvolume/pv002 created persistentvolume/pv003 created persistentvolume/pv004 created

    查看PV

    # 查看PV [root@k8s-80 k8syaml]# kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pv001 5Gi RWO,RWX Retain Available 26s pv002 10Gi RWO,RWX Retain Available 26s pv003 15Gi RWO,RWX Retain Available 26s pv004 20Gi RWO,RWX Retain Available 26s

    使用PVC

    [root@k8s-80 k8syaml]# vim pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata:name: pvc spec:accessModes:- "ReadWriteMany"resources:requests:storage: "12Gi" [root@k8s-80 k8syaml]# kubectl apply -f pvc.yaml persistentvolumeclaim/pvc created# 查看 [root@k8s-80 k8syaml]# kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE pvc Bound pv003 15Gi RWO,RWX 21s # 修改nginx.yamlresources:limits:cpu: 200mmemory: 200Mirequests:cpu: 200mmemory: 200MivolumeMounts:- mountPath: /usr/local/nginx/html/name: shyvolumes:- name: shypersistentVolumeClaim:claimName: pvc [root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml# 查看與驗證 [root@k8s-80 k8syaml]# kubectl get po NAME READY STATUS RESTARTS AGE mysql-0 1/1 Running 0 102m web-5c49d99965-qsf7r 1/1 Running 0 41s[root@k8s-80 k8syaml]# kubectl describe po web-5c49d99965-qsf7r[root@k8s-80 k8syaml]# kubectl get pv # 已經綁定成功 NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pv001 5Gi RWO,RWX Retain Available 38m pv002 10Gi RWO,RWX Retain Available 38m pv003 15Gi RWO,RWX Retain Bound default/pvc 38m pv004 20Gi RWO,RWX Retain Available 38m # nfs服務器 [root@nfs ~]# echo "iPhone 14 Pro Max" > /data/v4/index.html# 瀏覽器驗證

    StorageClass

    注:適用于大公司集群

在一個大規模的Kubernetes集群裡,可能有成千上萬個 PVC,這就意味着運維人員必須提前創建出這麼多個 PV;此外,隨着項目的需要,會有新的 PVC 不斷被提交,那麼運維人員就需要不斷地添加新的、滿足要求的 PV,否則新的 Pod 就會因為 PVC 綁定不到 PV 而導致創建失敗。而且通過 PVC 請求到的存儲空間也很有可能不足以滿足應用對於存儲設備的各種需求,不同的應用程序對於存儲性能的要求也不盡相同,比如讀寫速度、並發性能等。為了解決這一問題,Kubernetes 又為我們引入了一個新的資源對象:StorageClass。通過 StorageClass 的定義,管理員可以將存儲資源定義為某種類型的資源,比如快速存儲、慢速存儲等,kubernetes 根據 StorageClass 的描述就可以非常直觀地知道各種存儲資源的具體特性,這樣就可以根據應用的特性去申請合適的存儲資源了。
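下面補一個最小示意:假設集群裡已經部署了 NFS 的外部製備器(provisioner 名稱 k8s-sigs.io/nfs-subdir-external-provisioner 為該項目常用值,屬於假設,需按實際部署調整),PVC 只要寫上 storageClassName,就會自動創建並綁定 PV,不再需要手動維護 PV:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # 假設已部署該 NFS 製備器
reclaimPolicy: Retain
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-sc
spec:
  storageClassName: nfs-sc      # 指定 StorageClass,由製備器動態創建 PV
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi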

    configmap

    官方傳送門

在生產環境中經常會遇到需要修改配置文件的情況,傳統的修改方式不僅會影響到服務的正常運行,而且操作步驟也很繁瑣。為了解決這個問題,kubernetes 項目從 1.2 版本引入了 ConfigMap 功能,用於將應用的配置信息與程序分離。這種方式不僅可以實現應用程序的復用,而且還可以通過不同的配置實現更靈活的功能。在創建容器時,用戶可以將應用程序打包為容器鏡像後,通過環境變量或者外接掛載文件的方式進行配置注入。ConfigMap && Secret 是 K8S 中針對應用的配置中心,它有效地解決了應用配置掛載的問題,並且支持加密以及熱更新等功能,可以說是 k8s 提供的一個非常好用的功能。

    一個重要的需求就是應用的配置管理、敏感信息的存儲和使用(如:密碼、Token 等)、容器運行資源的配置、安全管控、身份認證等等。

    對于應用的可變配置在 Kubernetes 中是通過一個 ConfigMap 資源對象來實現的,我們知道許多應用經常會有從配置文件、命令行參數或者環境變量中讀取一些配置信息的需求,這些配置信息我們肯定不會直接寫死到應用程序中去的,比如你一個應用連接一個 mysql 服務,下一次想更換一個了的,還得重新去修改代碼,重新制作一個鏡像,這肯定是不可取的,而**ConfigMap 就給我們提供了向容器中注入配置信息的能力,不僅可以用來保存單個屬性,還可以用來保存整個配置文件,比如我們可以用來配置一個 mysql 服務的訪問地址,也可以用來保存整個 mysql 的配置文件。**接下來我們就來了解下 ConfigMap 這種資源對象的使用方法。

[root@k8s-80 k8syaml]# kubectl get configmap   # configmap 可縮寫為 cm;kube-root-ca.crt 存的是集群根 CA 證書
NAME               DATA   AGE
kube-root-ca.crt   1      2d6h

[root@k8s-80 k8syaml]# kubectl get configmap -n kube-system   # 基本都是各組件的配置
NAME                                 DATA   AGE
coredns                              1      2d6h
extension-apiserver-authentication   6      2d6h
kube-flannel-cfg                     2      2d6h
kube-proxy                           2      2d6h
kube-root-ca.crt                     1      2d6h
kubeadm-config                       1      2d6h
kubelet-config-1.22                  1      2d6h

    現在數據庫密碼是以環境變量的形式掛進去,不想明文顯示怎么辦?

[root@k8s-80 k8syaml]# kubectl describe po mysql-0   # 能在裡面找到密碼

# 進入容器也能找到
[root@k8s-80 k8syaml]# kubectl exec -it mysql-0 -- bash
root@mysql-0:/# env

    解決方法

[root@k8s-80 k8syaml]# vim cm-mysql.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
  labels:
    app: conf
data:
  MYSQL_ROOT_PASSWORD: "123456"

[root@k8s-80 k8syaml]# kubectl apply -f cm-mysql.yaml
configmap/conf created

[root@k8s-80 k8syaml]# kubectl get cm
NAME               DATA   AGE
conf               1      9s
kube-root-ca.crt   1      2d6h

# 引用:修改 mysql.yaml,把原來的 env 改成 envFrom
        image: mysql:5.7
        envFrom:
        - configMapRef:
            name: conf

[root@k8s-80 k8syaml]# kubectl apply -f mysql.yaml
service/mysql unchanged
statefulset.apps/mysql configured

[root@k8s-80 k8syaml]# kubectl get po
NAME                   READY   STATUS    RESTARTS   AGE
mysql-0                1/1     Running   0          8s
web-5c49d99965-qsf7r   1/1     Running   0          22m

# 驗證
[root@k8s-80 k8syaml]# kubectl describe po mysql-0   # 不再是直接顯示密碼,如圖

    # 但是進入容器輸入env還是能看到 [root@k8s-80 k8syaml]# kubectl exec -it mysql-0 -- bash root@mysql-0:/# env# kubectl獲取詳細信息也能看到 [root@k8s-80 k8syaml]# kubectl describe cm conf Name: conf Namespace: default Labels: app=conf Annotations: <none>Data ==== MYSQL_ROOT_PASSWORD: ---- 123456BinaryData ====Events: <none>
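需要注意,ConfigMap 本身並不加密,describe/env 仍然能看到明文;如果確實想避免明文,一般改用 Secret(數據以 base64 存儲,並可配合 RBAC 限制查看權限)。下面是一個最小示意(Secret 名稱等為假設):

apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
stringData:                      # stringData 寫明文,提交後由 k8s 轉成 base64 存入 data
  MYSQL_ROOT_PASSWORD: "123456"

引用方式與 ConfigMap 類似,把 envFrom 裡的 configMapRef 換成 secretRef 即可:

        envFrom:
        - secretRef:
            name: mysql-secret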

    使用

    ConfigMap 創建成功了,那么我們應該怎么在 Pod 中來使用呢?我們說 ConfigMap 這些配置數據可以通過很多種方式在 Pod 里使用,主要有以下幾種方式:

    • 設置環境變量的值
    • 在容器里設置命令行參數
    • 在數據卷里面掛載配置文件
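前兩種方式(環境變量、命令行參數)文中沒有單獨的示例,這裡補一個最小示意:用 valueFrom 只取 ConfigMap 裡的某一個 key 作為環境變量,再用 $(VAR) 的形式把它帶進啟動命令(其中 ConfigMap 名稱、key 均為演示用的假設值):

# 假設已存在如下 ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-conf
data:
  LOG_LEVEL: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.35
    # 方式一:環境變量,只取 ConfigMap 裡的某一個 key
    env:
    - name: LOG_LEVEL
      valueFrom:
        configMapKeyRef:
          name: demo-conf
          key: LOG_LEVEL
    # 方式二:命令行參數,k8s 會先把 $(LOG_LEVEL) 展開成環境變量的值再傳給容器
    command: [ "/bin/sh", "-c" ]
    args: [ "echo log_level=$(LOG_LEVEL); sleep 3600" ]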

掛載的方式使用 configmap

    [root@k8s-80 ~]# cd /k8syaml/ [root@k8s-80 k8syaml]# cp nginx.yaml cm-nginx.yaml [root@k8s-80 k8syaml]# vi cm-nginx.yaml apiVersion: v1 kind: Service metadata:name: cm-nginx spec:#type: NodePortselector:app: cm-nginxports:- name: httpprotocol: TCPport: 80targetPort: 80#nodePort: 30005 --- kind: ConfigMap apiVersion: v1 metadata:name: cm-nginxlabels:app: cm-nginx data:key: valuenginx.conf: |-upstream web {server 192.168.75.110;}server {listen 80;server_name www.dyz.com;location / {proxy_pass http://web;index index.htm index.html;proxy_set_header Host $host;proxy_set_header X-Real-IP $remote_addr;}} --- apiVersion: apps/v1 kind: Deployment metadata:name: cm-nginxlabels:app: cm-nginx spec:replicas: 1selector:matchLabels:app: cm-nginxtemplate:metadata:labels:app: cm-nginxspec:affinity:podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution:- labelSelector: matchExpressions:- key : appoperator: Invalues:- cm-nginxtopologyKey: kubernetes.io/hostnamenodeSelector:node-role.kubernetes.io/bus: "true"containers:- name: cm-nginximage: nginx:1.20.1livenessProbe:exec:command:- /bin/sh- -c- nginx -tinitialDelaySeconds: 5timeoutSeconds: 1readinessProbe:tcpSocket:port: 80ports:- containerPort: 80resources:limits:cpu: 500mmemory: 1000Mirequests:cpu: 200mmemory: 300MivolumeMounts:- mountPath: /etc/nginx/upstreamname: kgevolumes:- name: kgeconfigMap:name: cm-nginxitems:- key: "nginx.conf"path: "nginx.conf" --- apiVersion: autoscaling/v2beta1 #apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata:name: cm-nginx spec:scaleTargetRef:apiVersion: apps/v1name: cm-nginxkind: DeploymentminReplicas: 1maxReplicas: 10metrics:- type: Resourceresource:name: cputargetAverageUtilization: 500 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata:name: cm-nginxannotations:nginx.ingress.kubernetes.io/rewrite-target: / spec:ingressClassName: nginxrules:- host: www.lin.comhttp:paths:- path: /pathType: Prefixbackend:service:name: cm-nginxport:number: 80 [root@k8s-80 k8syaml]# kubectl apply -f cm-nginx.yaml [root@k8s-80 k8syaml]# kubectl get po# 進入容器會發現/etc/nginx/upstream里有配置清單里面寫的配置,文件夾沒有也會創建; 生成目錄


