
Kubernetes (k8s) High Availability: Overview and Installation


I. Introduction

Kubernetes was created by Google in 2014 as the open-source descendant of Borg, the large-scale container management system Google had run internally for more than ten years. It is an open-source container cluster management system for containerized applications running on multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

A core feature of Kubernetes is that it manages containers autonomously to keep them in the state the user declared. For example, if the user wants Apache to keep running, Kubernetes monitors it and restarts or recreates it as needed so that Apache keeps serving, without the user having to intervene. An administrator can load a microservice and let the scheduler find a suitable place for it. Kubernetes also invests in tooling and usability so that users can deploy their applications conveniently (for example, canary deployments).

Today Kubernetes focuses on long-running services (such as web servers or cache servers) and cloud-native applications (NoSQL); in the near future it will support more of the workloads found in production clouds, such as batch jobs, workflows, and traditional databases.

What Kubernetes provides: rapid application deployment, rapid scaling, seamless rollout of new application features, and resource savings through better use of hardware.
Kubernetes characteristics:
Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud
Extensible: modular, pluggable, mountable, composable
Automated: automatic deployment, automatic restarts, automatic replication, automatic scaling

Kubernetes Architecture

A Kubernetes cluster consists of the node agent kubelet and the master components (API server, scheduler, etc.), all built on top of a distributed storage system. The diagram below shows the Kubernetes architecture.

Kubernetes Nodes

In this architecture, services are split into those that run on worker nodes and those that make up the cluster-level control plane.
Kubernetes nodes run the services required to host application containers, and these services are managed by the master.
Each node, of course, runs Docker, which handles image pulls and container execution.

Kubernetes is composed of the following core components:

etcd: stores the state of the entire cluster;
apiserver: the single entry point for resource operations, providing authentication, authorization, access control, API registration and discovery;
controller manager: maintains the cluster state, e.g. failure detection, automatic scaling, rolling updates;
scheduler: schedules resources, placing Pods onto machines according to the configured scheduling policy;
kubelet: manages the container lifecycle, as well as volumes (CVI) and networking (CNI);
Container runtime: manages images and actually runs Pods and containers (CRI);
kube-proxy: provides in-cluster service discovery and load balancing for Services;
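In a kubeadm-built cluster like the one installed below, most of these components are visible as pods, so a quick way to get acquainted with them is to list the kube-system namespace (a minimal sketch; pod names vary per cluster):

# Control-plane components (apiserver, controller-manager, scheduler, etcd)
# run as static pods on the masters; kube-proxy runs as a DaemonSet on every node.
kubectl get pods -n kube-system -o wide
# The kubelet itself is not a pod; it runs as a systemd service on each node:
systemctl status kubelet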

Besides the core components, there are several recommended add-ons:

kube-dns: provides DNS for the whole cluster
Ingress Controller: provides an external entry point for services
Heapster: provides resource monitoring
Dashboard: provides a GUI
Federation: provides clusters that span availability zones
Fluentd-elasticsearch: provides cluster log collection, storage and querying

Layered Architecture

Conceptually, Kubernetes is a layered architecture somewhat like Linux, as shown below:

Core layer: the core functionality of Kubernetes; it exposes APIs for building higher-level applications and provides a plugin-style execution environment internally
Application layer: deployments (stateless applications, stateful applications, batch tasks, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
Management layer: system metrics (infrastructure, container and network metrics), automation (auto-scaling, dynamic provisioning, etc.) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
Interface layer: the kubectl command-line tool, client SDKs and cluster federation
Ecosystem: the large container-cluster management and scheduling ecosystem built on top of the interface layer, which falls into two categories:
        Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, off-the-shelf applications, ChatOps, etc.
        Inside Kubernetes: CRI, CNI, CVI, image registry, Cloud Provider, configuration and management of the cluster itself, etc.

kubelet

The kubelet manages pods and the containers, images, volumes, etc. that belong to them.

kube-proxy

Every node also runs a simple network proxy and load balancer (see the services FAQ in the official English docs). Services defined in the Kubernetes API (see the services doc) can be reached through simple round-robin TCP/UDP forwarding.
Service endpoints are currently discovered through DNS or environment variables (both Docker-links-compatible and Kubernetes-style {FOO}_SERVICE_HOST and {FOO}_SERVICE_PORT variables are supported), which resolve to the port managed by the service proxy.
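For example, for a Service named redis-master in the pod's namespace, the injected variables look like the ones below. This is only an illustration; the pod name, service name and addresses are hypothetical:

# Inspect the service-discovery environment variables injected into a pod
kubectl exec my-app-pod -- env | grep SERVICE
# REDIS_MASTER_SERVICE_HOST=10.1.0.153   (ClusterIP of the Service)
# REDIS_MASTER_SERVICE_PORT=6379         (port exposed by the Service)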

Kubernetes Control Plane

The Kubernetes control plane is split into several components. Today they all run on a single master node, but that is expected to change in order to achieve high availability. Together, these components provide a unified view of the cluster.

etcd

All persistent master state is stored in an instance of etcd, which provides reliable storage for configuration data and, thanks to watch support, lets the coordinating components notice changes very quickly.
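The watch mechanism can be observed directly with etcdctl. The sketch below assumes the etcdctl v3 client is installed on a master and uses the default kubeadm certificate paths, which may differ in other setups:

# Watch every change under the Kubernetes key prefix in etcd (v3 API)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  watch --prefix /registry/pods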

Kubernetes API Server

The API server serves the Kubernetes API. It is intended to be a relatively simple CRUD-style server, with all or most of the business logic pushed into separate components. It mainly handles REST operations, validating the objects and ultimately persisting them in etcd.

Scheduler

The scheduler binds unscheduled pods to nodes via the binding API. The scheduler is pluggable, and support for multiple cluster schedulers, and in the future even user-provided schedulers, is expected.

Kubernetes Controller Manager

All other cluster-level functions are currently performed by the controller manager. For instance, Endpoints objects are created and updated by the endpoints controller. These functions may eventually be split into separate components to make them independently pluggable.

The replicationcontroller (see the official docs) is a mechanism layered on top of the simple pod API. Once implemented, the plan is to turn this into a general-purpose plugin mechanism.

II. Installing Kubernetes

Ways to install Kubernetes

1. kubeadm (recommended upstream; good for learning and testing; components run as containers)
2. Binary installation (used in production; components run as processes)
3. Ansible-based installation

OS           Resources            IP address                          Hostname
CentOS 7.4   4 GB RAM, 2 CPUs     192.168.2.1 (VIP: 192.168.2.5)      k8s-master1
CentOS 7.4   4 GB RAM, 2 CPUs     192.168.2.2 (VIP: 192.168.2.5)      k8s-master2
CentOS 7.4   2 GB RAM, 1 CPU      192.168.2.3                         k8s-node1
CentOS 7.4   2 GB RAM, 1 CPU      192.168.2.4                         k8s-node2

1. Environment configuration

Note: make sure every server has a network interface that can reach the Internet (verify with ping).

# Configure the hosts file on master1, master2, node1 and node2
echo '
192.168.2.1 k8s-master1
192.168.2.2 k8s-master2
192.168.2.3 k8s-node1
192.168.2.4 k8s-node2
' >> /etc/hosts

所有服務器關閉防火墻、selinux、dnsmasq、swap

# Run on all servers
systemctl disable --now firewalld
systemctl disable --now NetworkManager    # not needed on CentOS 8

# Run on all servers: turn off swap
[root@k8s-master1 ~]# swapoff -a && sysctl -w vm.swappiness=0
vm.swappiness = 0

# Run on all servers: comment out the swap line in /etc/fstab
# (leaving swap enabled can hurt k8s performance; the line number may differ on your system)
[root@k8s-master1 ~]# sed -i '12 s/^/#/' /etc/fstab

# Run on all servers: install and run time synchronization
[root@k8s-master1 ~]# yum -y install ntpdate
[root@k8s-master1 ~]# ntpdate time2.aliyun.com

# Run on all servers: raise the open-file limit to 65535
[root@k8s-master1 ~]# ulimit -SHn 65535

# Set up passwordless SSH from master1 to the other nodes. The certificates and
# configuration files are generated on master1 during the install, and the cluster
# is also managed from master1.
[root@k8s-master1 ~]# ssh-keygen -t rsa      # only on master1
[root@k8s-master1 ~]# for i in k8s-master1 k8s-master2 k8s-node1 k8s-node2;do ssh-copy-id -i .ssh/id_rsa.pub $i;done

# Run on all servers: configure the yum repositories
[root@k8s-master1 ~]# yum -y install wget
[root@k8s-master1 ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Run on all servers: add the Kubernetes yum repository
[root@k8s-master1 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master1 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
[root@k8s-master1 ~]# yum makecache    # rebuild the yum cache

# Run on all servers: install common utilities
[root@k8s-master1 ~]# yum -y install wget psmisc vim net-tools telnet
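Before moving on, it is worth confirming the preparation took effect on each node; a small verification sketch using standard tools:

swapon --summary                 # should print nothing once swap is off
getenforce                       # SELinux should report Permissive or Disabled
systemctl is-active firewalld    # should report inactive
ulimit -n                        # should report 65535 in the current shell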

2. Kernel configuration

Install ipvsadm on all nodes (IPVS performs better than iptables):

# Run on all servers
[root@k8s-master1 ~]# yum -y install ipvsadm ipset sysstat conntrack libseccomp

# Load the IPVS modules on all nodes
[root@k8s-master1 ~]# modprobe -- ip_vs
[root@k8s-master1 ~]# modprobe -- ip_vs_rr
[root@k8s-master1 ~]# modprobe -- ip_vs_wrr
[root@k8s-master1 ~]# modprobe -- ip_vs_sh
[root@k8s-master1 ~]# modprobe -- nf_conntrack_ipv4

# Run on all servers: make the modules load automatically at boot
[root@k8s-master1 ~]# vi /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
# save and exit

[root@k8s-master1 ~]# systemctl enable --now systemd-modules-load.service
[root@k8s-master1 ~]# lsmod | grep -e ip_vs -e nf_conntrack_ipv4    # verify the modules are loaded
nf_conntrack_ipv4      15053  0
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 141092  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          133387  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack

# Configure the k8s kernel parameters on all nodes (copy the block below as-is)
[root@k8s-master1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
[root@k8s-master1 ~]# sysctl --system
sysctl -p

3. Component installation

# Install the latest Docker on all servers. k8s manages pods; a pod contains one or more containers.
[root@k8s-master1 ~]# yum -y install docker-ce
[root@k8s-master1 ~]# yum list kubeadm.x86_64 --showduplicates | sort -r    # list available k8s versions
[root@k8s-master1 ~]# yum install -y kubeadm-1.19.3-0.x86_64 kubectl-1.19.3-0.x86_64 kubelet-1.19.3-0.x86_64

# kubeadm: the command that bootstraps the cluster
# kubelet: runs on every node and starts pods and containers
# kubectl: the command-line tool used to talk to the cluster

# Enable Docker at boot on all nodes
[root@k8s-master1 ~]# systemctl enable --now docker

By default the pause image comes from the gcr.io registry, which may be unreachable from mainland China, so the kubelet is configured here to use the Alibaba Cloud pause image instead. The pause container is what reaps zombie processes inside a pod.

# Run on all servers
[root@k8s-master1 ~]# DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

# Enable the kubelet at boot on all servers
systemctl daemon-reload
systemctl enable --now kubelet
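The kubelet's cgroup driver must match Docker's; a quick way to confirm what was picked up (a small sketch using standard docker and grep calls):

# Print the cgroup driver Docker is using (usually cgroupfs or systemd)
docker info --format '{{.CgroupDriver}}'
# Confirm the same value ended up in the kubelet's extra arguments
grep cgroup-driver /etc/sysconfig/kubelet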

4. Installing the high-availability components

Note: install HAProxy and KeepAlived via yum on all master nodes.

[root@k8s-master1 ~]# yum -y install keepalived haproxy

Configure HAProxy on all master nodes; the HAProxy configuration is identical on every master.

[root@k8s-master1 ~]# vim /etc/haproxy/haproxy.cfg
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s

defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin

frontend k8s-master
    bind 0.0.0.0:16443
    bind 127.0.0.1:16443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-master1 192.168.2.1:6443 check    # change to your master1 IP
    server k8s-master2 192.168.2.2:6443 check    # change to your master2 IP
# save and exit
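Before starting the service it is worth validating the file; HAProxy can check its own configuration syntax:

# Validate the configuration without starting HAProxy
haproxy -c -f /etc/haproxy/haproxy.cfg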

Configure keepalived on the master1 node only:

[root@k8s-master1 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33               # change to your NIC name
    mcast_src_ip 192.168.2.1      # change to this node's IP
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.5               # the virtual IP
    }
#    track_script {               # the health check stays disabled until the cluster is built
#        chk_apiserver
#    }
}
# save and exit

Configure keepalived on the master2 node only:

[root@k8s-master2 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 2
    weight -5
    fall 3
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33               # change to your NIC name
    mcast_src_ip 192.168.2.2      # change to this node's IP
    virtual_router_id 51
    priority 99
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.2.5               # the virtual IP
    }
#    track_script {               # the health check stays disabled until the cluster is built
#        chk_apiserver
#    }
}
# save and exit

Create the KeepAlived health-check script on all master nodes:

[root@k8s-master1 ~]# vim /etc/keepalived/check_apiserver.sh
#!/bin/bash

err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
# save and exit
[root@k8s-master1 ~]# chmod a+x /etc/keepalived/check_apiserver.sh
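You can dry-run the script by hand before enabling it in keepalived. Note that on a node where kube-apiserver is not running it will stop keepalived and return 1, so only test it once you accept that side effect:

# Run the check manually and inspect its exit code
bash /etc/keepalived/check_apiserver.sh; echo "exit code: $?"
# 0 means kube-apiserver was found; 1 means it was not (and keepalived gets stopped)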

Start haproxy and keepalived on the masters:

[root@k8s-master1 ~]# systemctl enable --now haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@k8s-master1 ~]# systemctl enable --now keepalived
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.

Check that 192.168.2.5 is present:

[root@k8s-master1 ~]# ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00inet 127.0.0.1/8 scope host lovalid_lft forever preferred_lft foreverinet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000link/ether 00:0c:29:7b:b0:46 brd ff:ff:ff:ff:ff:ffinet 192.168.2.1/24 brd 192.168.2.255 scope global ens33valid_lft forever preferred_lft foreverinet 192.168.2.5/32 scope global ens33 #虛擬IPvalid_lft forever preferred_lft foreverinet6 fe80::238:ba2d:81b5:920e/64 scope link valid_lft forever preferred_lft forever ...... ...[root@k8s-master1 ~]# netstat -anptu |grep 16443 tcp 0 0 127.0.0.1:16443 0.0.0.0:* LISTEN 105533/haproxy tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 105533/haproxy ———————————————————————————————————————————————————— [root@k8s-master2 ~]# netstat -anptu |grep 16443 tcp 0 0 127.0.0.1:16443 0.0.0.0:* LISTEN 96274/haproxy tcp 0 0 0.0.0.0:16443 0.0.0.0:* LISTEN 96274/haproxy

Check which images the master needs:

[root@k8s-master1 ~]# kubeadm config images list I0306 14:28:17.418780 104960 version.go:252] remote version is much newer: v1.23.4; falling back to: stable-1.18 W0306 14:28:19.249961 104960 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io] k8s.gcr.io/kube-apiserver:v1.18.20 k8s.gcr.io/kube-controller-manager:v1.18.20 k8s.gcr.io/kube-scheduler:v1.18.20 k8s.gcr.io/kube-proxy:v1.18.20 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/coredns:1.6.7
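Note that the listing above falls back to the stable-1.18 channel because no version was pinned, while the install below uses v1.19.3. If your nodes can reach the Alibaba Cloud mirror directly, kubeadm can also pull the pinned images itself instead of using the download script below (an alternative sketch, not what this article does):

# Pull the control-plane images for the pinned version from the Aliyun mirror
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.19.3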

Perform the following on all masters:

# Create the image-download script on every master
[root@k8s-master1 ~]# vim alik8simages.sh
#!/bin/bash
list='kube-apiserver:v1.19.3
kube-controller-manager:v1.19.3
kube-scheduler:v1.19.3
kube-proxy:v1.19.3
pause:3.2
etcd:3.4.13-0
coredns:1.7.0'
for item in ${list}
do
    docker pull registry.aliyuncs.com/google_containers/$item && docker tag registry.aliyuncs.com/google_containers/$item k8s.gcr.io/$item && docker rmi registry.aliyuncs.com/google_containers/$item && docker pull jmgao1983/flannel
done
# save and exit
[root@k8s-master1 ~]# bash alik8simages.sh    # run the script to download the images
# What the script does for each image, for example:
# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.3
# docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.3 k8s.gcr.io/kube-apiserver:v1.19.3
# docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.3

Perform the following on all nodes:

# Node version of the script (only kube-proxy and pause are needed here)
[root@k8s-node1 ~]# vim alik8simages.sh
#!/bin/bash
list='kube-proxy:v1.19.3
pause:3.2'
for item in ${list}
do
    docker pull registry.aliyuncs.com/google_containers/$item && docker tag registry.aliyuncs.com/google_containers/$item k8s.gcr.io/$item && docker rmi registry.aliyuncs.com/google_containers/$item && docker pull jmgao1983/flannel
done
# save and exit
[root@k8s-node1 ~]# bash alik8simages.sh

Enable the kubelet at boot on all nodes:

[root@k8s-master1 ~]# systemctl enable --now kubelet

Initialize the master1 node. Initialization generates the certificates and configuration files under /etc/kubernetes; the other master nodes can then join master1.

# Option meanings:
#   --kubernetes-version=v1.19.3               k8s version
#   --apiserver-advertise-address=192.168.2.1  master1's address
#   --pod-network-cidr=10.244.0.0/16           pod (container) network CIDR
# Run the command below:
[root@k8s-master1 ~]# kubeadm init --kubernetes-version=v1.19.3 --apiserver-advertise-address=192.168.2.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
......
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
......
# The command printed at the end is what the other nodes use to join the cluster:
kubeadm join 192.168.2.1:6443 --token ona3p0.flcw3tmfl3fsfn5r \
    --discovery-token-ca-cert-hash sha256:8c74d27c94b5c6a1f2c226e93e605762df708b44129145791608e959d30aa36f

# Run the commands suggested by the init output:
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
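Note that this init advertises master1's own IP rather than the HAProxy/keepalived VIP, so master2 later joins as a worker (its ROLES column shows <none> below). If you want a true multi-master control plane, kubeadm can instead be pointed at the load-balanced endpoint; a hedged sketch of that alternative (it deviates from the steps in this article):

# Initialize against the VIP:16443 fronted by HAProxy and upload the certificates
# so that additional control-plane nodes can join:
kubeadm init \
  --kubernetes-version=v1.19.3 \
  --control-plane-endpoint "192.168.2.5:16443" \
  --upload-certs \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.1.0.0/16
# A second master would then join with the --control-plane flag and the
# certificate key printed by the init above (token/hash/key are placeholders):
# kubeadm join 192.168.2.5:16443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash> \
#   --control-plane --certificate-key <key>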

Configure the environment variable used to access the Kubernetes cluster on all master nodes:

cat <<EOF >> /root/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF
source /root/.bashrc

Check the node status:

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE    VERSION
k8s-master1   NotReady   master   7m7s   v1.19.3

# Run the following on every machine to flush stale iptables rules:
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Join the other nodes to the cluster (node1 and node2 run the same command; just copy it):

# Join the cluster with the command printed by kubeadm init
[root@k8s-master2 ~]# kubeadm join 192.168.2.1:6443 --token ona3p0.flcw3tmfl3fsfn5r \
> --discovery-token-ca-cert-hash sha256:8c74d27c94b5c6a1f2c226e93e605762df708b44129145791608e959d30aa36f
......
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-master1 ~]# kubectl get nodes    # check after the nodes have joined
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   18m     v1.19.3
k8s-master2   NotReady   <none>   2m19s   v1.19.3
k8s-node1     NotReady   <none>   2m16s   v1.19.3
k8s-node2     NotReady   <none>   2m15s   v1.19.3

# Run only on master1. This removes the master taint so workloads can be scheduled
# on the master; we do it because our lab resources are limited. In production you
# normally would not run this.
[root@k8s-master1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master1 untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found

[root@k8s-master1 ~]# kubectl describe nodes k8s-master1 | grep -E '(Roles|Taints)'
Roles:              master
Taints:             <none>

# Copy the admin kubeconfig to master2
[root@k8s-master1 ~]# scp /etc/kubernetes/admin.conf root@192.168.2.2:/etc/kubernetes/admin.conf
admin.conf                                   100% 5567     4.6MB/s   00:00
[root@k8s-master2 ~]# kubectl describe nodes k8s-master2 | grep -E '(Roles|Taints)'
Roles:              <none>
Taints:             <none>
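The bootstrap token in the join command expires after 24 hours by default; if you add nodes later, kubeadm can print a fresh join command for you:

# Generate a new token and print the matching kubeadm join command
kubeadm token create --print-join-command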

Configuring the flannel network add-on

[root@k8s-master1 ~]# vim flannel.yml --- apiVersion: policy/v1beta1 kind: PodSecurityPolicy metadata:name: psp.flannel.unprivilegedannotations:seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/defaultseccomp.security.alpha.kubernetes.io/defaultProfileName: docker/defaultapparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/defaultapparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default spec:privileged: falsevolumes:- configMap- secret- emptyDir- hostPathallowedHostPaths:- pathPrefix: "/etc/cni/net.d"- pathPrefix: "/etc/kube-flannel"- pathPrefix: "/run/flannel"readOnlyRootFilesystem: false# Users and groupsrunAsUser:rule: RunAsAnysupplementalGroups:rule: RunAsAnyfsGroup:rule: RunAsAny# Privilege EscalationallowPrivilegeEscalation: falsedefaultAllowPrivilegeEscalation: false# CapabilitiesallowedCapabilities: ['NET_ADMIN', 'NET_RAW']defaultAddCapabilities: []requiredDropCapabilities: []# Host namespaceshostPID: falsehostIPC: falsehostNetwork: truehostPorts:- min: 0max: 65535# SELinuxseLinux:# SELinux is unused in CaaSPrule: 'RunAsAny' --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata:name: flannel rules: - apiGroups: ['extensions']resources: ['podsecuritypolicies']verbs: ['use']resourceNames: ['psp.flannel.unprivileged'] - apiGroups:- ""resources:- podsverbs:- get - apiGroups:- ""resources:- nodesverbs:- list- watch - apiGroups:- ""resources:- nodes/statusverbs:- patch --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata:name: flannel roleRef:apiGroup: rbac.authorization.k8s.iokind: ClusterRolename: flannel subjects: - kind: ServiceAccountname: flannelnamespace: kube-system --- apiVersion: v1 kind: ServiceAccount metadata:name: flannelnamespace: kube-system --- kind: ConfigMap apiVersion: v1 metadata:name: kube-flannel-cfgnamespace: kube-systemlabels:tier: nodeapp: flannel data:cni-conf.json: |{"name": "cbr0","cniVersion": "0.3.1","plugins": [{"type": "flannel","delegate": {"hairpinMode": true,"isDefaultGateway": true}},{"type": "portmap","capabilities": {"portMappings": true}}]}net-conf.json: |{"Network": "10.244.0.0/16","Backend": {"Type": "vxlan"}} --- apiVersion: apps/v1 kind: DaemonSet metadata:name: kube-flannel-dsnamespace: kube-systemlabels:tier: nodeapp: flannel spec:selector:matchLabels:app: flanneltemplate:metadata:labels:tier: nodeapp: flannelspec:affinity:nodeAffinity:requiredDuringSchedulingIgnoredDuringExecution:nodeSelectorTerms:- matchExpressions:- key: kubernetes.io/osoperator: Invalues:- linuxhostNetwork: truepriorityClassName: system-node-criticaltolerations:- operator: Existseffect: NoScheduleserviceAccountName: flannelinitContainers:- name: install-cniimage: jmgao1983/flannel:latestcommand:- cpargs:- -f- /etc/kube-flannel/cni-conf.json- /etc/cni/net.d/10-flannel.conflistvolumeMounts:- name: cnimountPath: /etc/cni/net.d- name: flannel-cfgmountPath: /etc/kube-flannel/containers:- name: kube-flannelimage: jmgao1983/flannel:latestcommand:- /opt/bin/flanneldargs:- --ip-masq- --kube-subnet-mgrresources:requests:cpu: "100m"memory: "50Mi"limits:cpu: "100m"memory: "50Mi"securityContext:privileged: falsecapabilities:add: ["NET_ADMIN", "NET_RAW"]env:- name: POD_NAMEvalueFrom:fieldRef:fieldPath: metadata.name- name: POD_NAMESPACEvalueFrom:fieldRef:fieldPath: metadata.namespacevolumeMounts:- name: runmountPath: /run/flannel- name: flannel-cfgmountPath: /etc/kube-flannel/volumes:- name: runhostPath:path: /run/flannel- name: cnihostPath:path: /etc/cni/net.d- name: flannel-cfgconfigMap:name: 
kube-flannel-cfg
# save and exit. Make sure the flannel image version matches the one you pulled; if it differs, change it here.

Apply flannel.yml:

[root@k8s-master1 ~]# kubectl apply -f flannel.yml podsecuritypolicy.policy/psp.flannel.unprivileged created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.apps/kube-flannel-ds created

Once it completes, check the pods running on the master:

[root@k8s-master1 ~]# kubectl get -A pods -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kube-system coredns-f9fd979d6-cnrvb 1/1 Running 0 6m32s 10.244.0.3 k8s-master1 <none> <none> kube-system coredns-f9fd979d6-fxsdt 1/1 Running 0 6m32s 10.244.0.2 k8s-master1 <none> <none> kube-system etcd-k8s-master1 1/1 Running 0 6m43s 192.168.2.1 k8s-master1 <none> <none> kube-system kube-apiserver-k8s-master1 1/1 Running 0 6m43s 192.168.2.1 k8s-master1 <none> <none> kube-system kube-controller-manager-k8s-master1 1/1 Running 0 6m43s 192.168.2.1 k8s-master1 <none> <none> kube-system kube-flannel-ds-7rt9n 1/1 Running 0 52s 192.168.2.3 k8s-node1 <none> <none> kube-system kube-flannel-ds-brktl 1/1 Running 0 52s 192.168.2.1 k8s-master1 <none> <none> kube-system kube-flannel-ds-kj9hg 1/1 Running 0 52s 192.168.2.4 k8s-node2 <none> <none> kube-system kube-flannel-ds-ld7xj 1/1 Running 0 52s 192.168.2.2 k8s-master2 <none> <none> kube-system kube-proxy-4wbh9 1/1 Running 0 3m27s 192.168.2.2 k8s-master2 <none> <none> kube-system kube-proxy-crfmv 1/1 Running 0 3m24s 192.168.2.3 k8s-node1 <none> <none> kube-system kube-proxy-twttg 1/1 Running 0 6m32s 192.168.2.1 k8s-master1 <none> <none> kube-system kube-proxy-xdg6r 1/1 Running 0 3m24s 192.168.2.4 k8s-node2 <none> <none> kube-system kube-scheduler-k8s-master1 1/1 Running 0 6m42s 192.168.2.1 k8s-master1 <none> <none>

Note: if any of the pods above are not in the Running state, you can re-initialize the Kubernetes cluster.

Note: first check that the component versions match; if they do not, re-initializing will not help.

[root@k8s-master1 ~]# rm -rf /etc/kubernetes/*
[root@k8s-master1 ~]# rm -rf ~/.kube/*
[root@k8s-master1 ~]# rm -rf /var/lib/etcd/*
[root@k8s-master1 ~]# kubeadm reset -f

# Then initialize again, the same way as before:
[root@k8s-master1 ~]# kubeadm init --kubernetes-version=v1.19.3 --apiserver-advertise-address=192.168.2.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
......
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

# Re-join the cluster
[root@k8s-master2 ~]# kubeadm join 192.168.2.1:6443 --token aqywqm.fcddl4o1sy2q1qgj \
> --discovery-token-ca-cert-hash sha256:4d3b60e0801e9c307ae6d94507e1fac514a493e277c715dc873eeadb950e5215
......

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   4m13s   v1.19.3
k8s-master2   Ready    <none>   48s     v1.19.3
k8s-node1     Ready    <none>   45s     v1.19.3
k8s-node2     Ready    <none>   45s     v1.19.3

[root@k8s-master1 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-master1 untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found

[root@k8s-master1 ~]# kubectl describe nodes k8s-master1 | grep -E '(Roles|Taints)'
Roles:              master
Taints:             <none>

[root@k8s-master1 ~]# kubectl apply -f flannel.yml
......

[root@k8s-master1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    master   43m   v1.19.3
k8s-master2   Ready    <none>   40m   v1.19.3
k8s-node1     Ready    <none>   40m   v1.19.3
k8s-node2     Ready    <none>   40m   v1.19.3

# Re-enable haproxy and keepalived on both masters
[root@k8s-master1 ~]# systemctl enable haproxy
[root@k8s-master1 ~]# systemctl enable keepalived
[root@k8s-master2 ~]# systemctl enable haproxy
[root@k8s-master2 ~]# systemctl enable keepalived

5. Deploying Metrics

In recent Kubernetes versions, system resource metrics are collected by Metrics Server, which gathers CPU, memory, disk and network usage for nodes and Pods.

About Metrics
Early Kubernetes versions relied on Heapster for performance data collection and monitoring. Starting with version 1.8, performance data is exposed through the standardized Metrics API, and from version 1.10 Heapster is replaced by Metrics Server. In the new monitoring architecture, Metrics Server provides the core metrics: CPU and memory usage for Nodes and Pods.
Other, custom metrics are monitored by components such as Prometheus.

Download the images and scripts required below.

Upload the required images to every server.

# Copy the image archives to the other servers
[root@k8s-master1 ~]# scp metrics* root@192.168.2.2:$PWD
metrics-scraper_v1.0.1.tar                   100%   38MB  76.5MB/s   00:00
metrics-server.tar.gz                        100%   39MB  67.1MB/s   00:00
[root@k8s-master1 ~]# scp metrics* root@192.168.2.3:$PWD
metrics-scraper_v1.0.1.tar                   100%   38MB  56.9MB/s   00:00
metrics-server.tar.gz                        100%   39MB  40.7MB/s   00:00
[root@k8s-master1 ~]# scp metrics* root@192.168.2.4:$PWD
metrics-scraper_v1.0.1.tar                   100%   38MB  61.8MB/s   00:00
metrics-server.tar.gz                        100%   39MB  49.2MB/s   00:00

# Load the images; run on every server
[root@k8s-master1 ~]# docker load -i metrics-scraper_v1.0.1.tar
......
[root@k8s-master1 ~]# docker load -i metrics-server.tar.gz
......

Upload the components.yaml file.

# On master1
[root@k8s-master1 ~]# kubectl apply -f components.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
Warning: apiregistration.k8s.io/v1beta1 APIService is deprecated in v1.19+, unavailable in v1.22+; use apiregistration.k8s.io/v1 APIService
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created

# Check the status
[root@k8s-master1 ~]# kubectl -n kube-system get pods -l k8s-app=metrics-server
NAME                             READY   STATUS    RESTARTS   AGE
metrics-server-5c98b8989-54npg   1/1     Running   0          9m55s
metrics-server-5c98b8989-9w9dd   1/1     Running   0          9m55s

# View resource usage (on master1)
[root@k8s-master1 ~]# kubectl top nodes     # per-node usage
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-master1   166m         8%     1388Mi          36%
k8s-master2   61m          3%     728Mi           18%
k8s-node1     50m          5%     889Mi           47%
k8s-node2     49m          4%     878Mi           46%
[root@k8s-master1 ~]# kubectl top pods -A   # per-pod usage
NAMESPACE     NAME                                  CPU(cores)   MEMORY(bytes)
kube-system   coredns-f9fd979d6-cnrvb               2m           14Mi
kube-system   coredns-f9fd979d6-fxsdt               2m           16Mi
kube-system   etcd-k8s-master1                      11m          72Mi
kube-system   kube-apiserver-k8s-master1            27m          298Mi
kube-system   kube-controller-manager-k8s-master1   11m          48Mi
kube-system   kube-flannel-ds-7rt9n                 1m           9Mi
kube-system   kube-flannel-ds-brktl                 1m           14Mi
kube-system   kube-flannel-ds-kj9hg                 2m           14Mi
kube-system   kube-flannel-ds-ld7xj                 1m           15Mi
kube-system   kube-proxy-4wbh9                      1m           19Mi
kube-system   kube-proxy-crfmv                      1m           11Mi
kube-system   kube-proxy-twttg                      1m           20Mi
kube-system   kube-proxy-xdg6r                      1m           13Mi
kube-system   kube-scheduler-k8s-master1            2m           24Mi
kube-system   metrics-server-5c98b8989-54npg        1m           10Mi
kube-system   metrics-server-5c98b8989-9w9dd        1m           12Mi
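If kubectl top reports that metrics are not yet available, the aggregated API usually has not registered; two standard checks:

# The APIService registered by components.yaml should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io
# Inspect the metrics-server logs if it stays unavailable
kubectl -n kube-system logs deploy/metrics-server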

6. Deploying the Dashboard

The Dashboard displays the various resources in the cluster; it can also be used to view Pod logs in real time and to run commands inside containers.

Dashboard is the web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (such as Deployments, Jobs, and DaemonSets). For example, you can scale a Deployment, start a rolling update, restart a Pod, or use a wizard to create a new application.

Upload the dashboard.yaml file.

# On master1
[root@k8s-master1 ~]# vim dashboard.yaml
......
    nodePort: 30001    # access port (around line 44 of the file)
......
# save and exit

# On master1
[root@k8s-master1 ~]# kubectl create -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
......

Confirm the status of the Dashboard pod and service. Note that kubernetes-dashboard pulls its image automatically, so make sure the network can reach the registry.

[root@k8s-master1 ~]# kubectl get pod,svc -n kubernetes-dashboard NAME READY STATUS RESTARTS AGE pod/dashboard-metrics-scraper-7445d59dfd-9jwcw 1/1 Running 0 36m pod/kubernetes-dashboard-7d8466d688-mgfq9 1/1 Running 0 36mNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/dashboard-metrics-scraper ClusterIP 10.1.70.163 <none> 8000/TCP 36m service/kubernetes-dashboard NodePort 10.1.158.233 <none> 443:30001/TCP 36m

Create the YAML file for the ServiceAccount and ClusterRoleBinding resources:

[root@k8s-master1 ~]# vim adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# save and exit
[root@k8s-master1 ~]# kubectl create -f adminuser.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Access test: https://192.168.2.2:30001

Get the token used to log in to the Dashboard UI:

[root@k8s-master1 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-rwzng
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 194f1012-cbed-4c15-b8c2-2142332174a9
Type:  kubernetes.io/service-account-token
Data
====
token:        # copy the token below and use it to log in
eyJhbGciOiJSUzI1NiIsImtpZCI6Imxad29JeHUyYVFucGJuQzBDNm5qYU1NVDVDUUItU0NqWUxvQTdtWjcyYW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXJ3em5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIxOTRmMTAxMi1jYmVkLTRjMTUtYjhjMi0yMTQyMzMyMTc0YTkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.nDvv2CevmtTBtpHqikXp5nRbzmJaMr13OSU5YCoBtMrOg1V6bMSn6Ctu5IdtxGExmDGY-69v4fBw7-DvJtLTon_rgsow6NA1LwUOuebMh8TwVrHSV0SW7yI0MCRFSMctC9NxIxyacxIDkDQ7eA7Rr9sQRKFpIWfjBgsC-k7z13IIuaAROXFrZKdqUUPd5hNTLhtFqtXOs7b_nMxzQTln9rSDIHozMTHbRMkL_oLm7McEGfrod7bO6RsTyPfn0TcK6pFCx5T9YA6AfoPMH3mNU0gsr-zbIYZyxIMr9FMpw2zpjP53BnaVhTQJ1M_c_Ptd774cRPk6vTWRPprul2U_OQ

Switch kube-proxy to ipvs mode. Because the ipvs setting was left out when the cluster was initialized, it has to be changed by hand:

[root@k8s-master1 ~]# curl 127.0.0.1:10249/proxyMode
iptables
[root@k8s-master1 ~]# kubectl edit cm kube-proxy -n kube-system
......
    mode: "ipvs"      # around line 44
......
# save and exit, then roll the kube-proxy pods so they pick up the change
[root@k8s-master1 ~]# kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
daemonset.apps/kube-proxy patched
[root@k8s-master1 ~]# curl 127.0.0.1:10249/proxyMode
ipvs
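Once kube-proxy is running in ipvs mode, Service rules appear as IPVS virtual servers, which you can list with the ipvsadm tool installed earlier:

# List the IPVS virtual servers and their real-server backends
ipvsadm -Ln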

Note: if you take VM snapshots and later restore them, the VIP may disappear and port 30001 may become unreachable. Fix: restart all masters to bring the VIP back, then re-initialize and re-apply the corresponding yaml files.
