Installing Kubernetes 1.14.1 (HA) with kubeadm


Official documentation:

https://kubernetes.io/docs/setup/independent/install-kubeadm/#verify-the-mac-address-and-product-uuid-are-unique-for-every-node

kubeadm init configuration file reference:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
  • Environment:

5 CentOS 7 hosts, fully updated. The etcd cluster runs on the 3 master nodes; the network plugin is Calico.

Hostname                        IP                    Role                     Components
k8s-company01-master01 ~ 03     172.16.4.201 ~ 203    3 master nodes           keepalived, haproxy, etcd, kubelet, kube-apiserver
k8s-company01-worker001 ~ 002   172.16.4.204 ~ 205    2 worker nodes           kubelet
k8s-company01-lb                172.16.4.200          keepalived virtual IP
  • Preparation (run on all nodes):

1. Make sure each VM has a unique MAC address and product UUID (check the UUID with: cat /sys/class/dmi/id/product_uuid).
2. Disable swap:
   swapoff -a; sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
3. Disable SELinux (as usual) and set the timezone:
   timedatectl set-timezone Asia/Shanghai
   # optional: echo "Asia/Shanghai" > /etc/timezone
4. Sync the clock (etcd is sensitive to clock skew):
   ntpdate asia.pool.ntp.org
   # add to crontab: 8 * * * * /usr/sbin/ntpdate asia.pool.ntp.org && /sbin/hwclock --systohc
5. yum update to the latest packages and reboot so the new kernel takes effect.

Notes — disable SELinux:

setenforce 0
sed -i --follow-symlinks "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sed -i --follow-symlinks "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config

Disable firewalld. If it stays on, many non-Kubernetes components will run into connectivity problems that are tedious to debug one by one; since this cluster sits on an internal network, we simply turn it off:

systemctl stop firewalld.service
systemctl disable firewalld.service

Set the hostnames (adjust to your own environment). On the five hosts respectively:

hostnamectl set-hostname k8s-company01-master01
hostnamectl set-hostname k8s-company01-master02
hostnamectl set-hostname k8s-company01-master03
hostnamectl set-hostname k8s-company01-worker001
hostnamectl set-hostname k8s-company01-worker002

Add the following to /etc/hosts on all five hosts:

cat >> /etc/hosts <<EOF
172.16.4.201 k8s-company01-master01.skymobi.cn k8s-company01-master01
172.16.4.202 k8s-company01-master02.skymobi.cn k8s-company01-master02
172.16.4.203 k8s-company01-master03.skymobi.cn k8s-company01-master03
172.16.4.200 k8s-company01-lb.skymobi.cn k8s-company01-lb
172.16.4.204 k8s-company01-worker001.skymobi.cn k8s-company01-worker001
172.16.4.205 k8s-company01-worker002.skymobi.cn k8s-company01-worker002
EOF

yum install wget git jq psmisc vim net-tools tcping bash-completion -y
yum update -y && reboot
# The reboot not only activates the upgraded kernel, it also lets services that read the hostname pick up the new one.
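The original guide has no explicit verification step, but it can save time to confirm the prerequisites actually took effect before installing anything else. The following is a small sanity-check sketch using standard CentOS 7 tools (the expected outputs are assumptions based on the steps above):

# Quick sanity checks after the preparation steps (run on every node)
cat /sys/class/dmi/id/product_uuid          # must be unique per node
swapon --summary                            # should print nothing (swap disabled)
getenforce                                  # Disabled after reboot (Permissive before)
systemctl is-active firewalld               # should print inactive
timedatectl | grep "Time zone"              # should show Asia/Shanghai
hostname                                    # should match the name set with hostnamectl
grep -c k8s-company01 /etc/hosts            # should print 6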
  • Install the CRI on every node (Docker here by default; Kubernetes 1.12+ recommends Docker 18.06, but 18.06 has a root privilege-escalation vulnerability, so we use the latest version at the time of writing, 18.09.5)

Installation reference: https://kubernetes.io/docs/setup/cri/

## Install prerequisites.
yum install -y yum-utils device-mapper-persistent-data lvm2

## Add docker repository.
yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## List all available docker-ce versions:
yum list docker-ce --showduplicates | sort -r

## Install docker. 'yum install docker-ce' would install the latest; here we pin a specific version:
yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5

# Set up the daemon.
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker
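Before moving on to kubelet it is worth confirming that Docker came up with the pinned version and the drivers from daemon.json, since the kubelet cgroup driver is derived from Docker's later on. A quick check, not part of the original steps:

# Confirm the pinned version and the daemon.json settings took effect
docker version --format '{{.Server.Version}}'          # expect 18.09.5
docker info 2>/dev/null | grep -E 'Storage Driver|Cgroup Driver'
# expect: Storage Driver: overlay2  /  Cgroup Driver: systemd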
  • Pin the Docker version so it is not accidentally upgraded to another major release later:

yum -y install yum-plugin-versionlock
yum versionlock docker-ce docker-ce-cli
yum versionlock list

# Note: to unlock later
# yum versionlock delete docker-ce docker-ce-cli

## Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly
## due to iptables being bypassed. You should ensure net.bridge.bridge-nf-call-iptables
## is set to 1 in your sysctl config, e.g.:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.may_detach_mounts = 1
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
EOF

modprobe br_netfilter
sysctl --system

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

# Install kubelet/kubeadm/kubectl plus ipvsadm, since kube-proxy will run in ipvs mode
# (cri-tools-1.12.0 and kubernetes-cni-0.7.5 are the two related dependency packages)
yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 cri-tools-1.12.0 kubernetes-cni-0.7.5 ipvsadm --disableexcludes=kubernetes

# Load the ipvs-related kernel modules
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
modprobe br_netfilter

# Load them at boot as well
cat <<EOF >>/etc/rc.d/rc.local
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
modprobe br_netfilter
EOF

## By default the file rc.local points to is not executable; add the executable bit
chmod +x /etc/rc.d/rc.local

lsmod | grep ip_vs

# Configure kubelet to use a China-accessible pause image
# Configure kubelet's cgroup driver to match docker's
DOCKER_CGROUPS=$(docker info | grep 'Cgroup' | cut -d' ' -f3)
echo $DOCKER_CGROUPS
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=$DOCKER_CGROUPS --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"
EOF

# Enable and start kubelet now. It will fail and restart every few seconds —
# that is expected: it is waiting for kubeadm to tell it what to do.
systemctl enable --now kubelet

# Enable kubectl tab completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
  • Configure the haproxy proxy on the three masters:

Run the following on all three master nodes. Port 16443 proxies the Kubernetes API server port 6443. Remember to adjust the hostnames and IPs at the end of the config, e.g. server k8s-company01-master01 172.16.4.201:6443.

# Pull the haproxy image (alpine-based, small)
docker pull reg01.sky-mobi.com/k8s/haproxy:1.9.1-alpine

mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
    log 127.0.0.1 local0 err
    maxconn 30000
    uid 99
    gid 99
    #daemon
    nbproc 1
    pidfile haproxy.pid

defaults
    mode http
    log 127.0.0.1 local0 err
    maxconn 30000
    retries 3
    timeout connect 5s
    timeout client 30s
    timeout server 30s
    timeout check 2s

listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth admin:skymobik8s
    stats hide-version
    stats admin if TRUE

frontend k8s-https
    bind 0.0.0.0:16443
    mode tcp
    #maxconn 30000
    default_backend k8s-https

backend k8s-https
    mode tcp
    balance roundrobin
    server k8s-company01-master01 172.16.4.201:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server k8s-company01-master02 172.16.4.202:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server k8s-company01-master03 172.16.4.203:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

# Start haproxy
docker run -d --name k8s-haproxy \
  -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  -p 16443:16443 \
  -p 1080:1080 \
  --restart always \
  -d reg01.sky-mobi.com/k8s/haproxy:1.9.1-alpine

# Check that it started. Connection errors in the logs are expected at this point,
# because the kube-apiserver port 6443 is not up yet.
docker ps

# If the configuration is wrong, clean up and try again:
docker stop k8s-haproxy
docker rm k8s-haproxy
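A quick way to confirm the proxy layer is listening before continuing — a convenience check that is not part of the original steps; the stats credentials are the ones defined in haproxy.cfg above:

# 16443 (API proxy) and 1080 (stats page) should both be listening
ss -lntp | grep -E '16443|1080'
# The stats page should answer with HTTP 200 using the auth from haproxy.cfg
curl -s -o /dev/null -w '%{http_code}\n' -u admin:skymobik8s http://127.0.0.1:1080/haproxy-status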
  • Configure keepalived on the three masters

# Pull the keepalived image
docker pull reg01.sky-mobi.com/k8s/keepalived:2.0.10

# Start keepalived. Adjust the interface name and the IPs.
# eth0 is the interface carrying the 172.16.4.0/24 network in this setup (if yours differs, change it;
# usage reference: https://github.com/osixia/docker-keepalived/tree/v2.0.10)
# The password must not exceed 8 characters: with a password like skymobik8s, only the first 8 characters
# end up in the VRRP packets, e.g. "addrs: k8s-master-lb auth 'skymobik'".
# Set KEEPALIVED_PRIORITY to 200 on the intended master node and 150 on the backups.
docker run --net=host --cap-add=NET_ADMIN \
  -e KEEPALIVED_ROUTER_ID=55 \
  -e KEEPALIVED_INTERFACE=eth0 \
  -e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['172.16.4.200']" \
  -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['172.16.4.201','172.16.4.202','172.16.4.203']" \
  -e KEEPALIVED_PASSWORD=skyk8stx \
  -e KEEPALIVED_PRIORITY=150 \
  --name k8s-keepalived \
  --restart always \
  -d reg01.sky-mobi.com/k8s/keepalived:2.0.10

# Check the logs: two nodes should become BACKUP and one MASTER
docker logs k8s-keepalived
# If the logs contain "received an invalid passwd!", some other device on the network uses the same
# ROUTER_ID; change KEEPALIVED_ROUTER_ID to fix it.

# Ping the virtual IP from any master
ping -c 4 172.16.4.200

# If the configuration is wrong, clean up and try again:
docker stop k8s-keepalived
docker rm k8s-keepalived
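To see which master currently holds the virtual IP, and to exercise a failover once, the following sketch can help (not from the original guide; it assumes eth0 is the interface configured above):

# The node acting as keepalived MASTER shows 172.16.4.200 as an extra address on eth0
ip addr show eth0 | grep 172.16.4.200

# Optional failover test: stop keepalived on the current MASTER, confirm the VIP moves
# to one of the backups (and the ping keeps answering), then start it again
docker stop k8s-keepalived && sleep 5 && docker start k8s-keepalived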
  • High availability
    Reference:
https://kubernetes.io/docs/setup/independent/high-availability/

First node: run the following on k8s-master01.

# Note: change controlPlaneEndpoint: "k8s-company01-lb:16443" to the hostname mapped to your virtual IP
cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
# add the available imageRepository in china
imageRepository: reg01.sky-mobi.com/k8s/k8s.gcr.io
controlPlaneEndpoint: "k8s-company01-lb:16443"
networking:
  podSubnet: "10.254.0.0/16"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
ipvs:
  minSyncPeriod: 1s
  syncPeriod: 10s
mode: ipvs
EOF

kubeadm-config parameter reference:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/

Pre-pull the images:

kubeadm config images pull --config kubeadm-config.yaml

Initialize master01:

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs

Pay attention to the messages printed at the start and resolve every WARNING it reports. If you need to start over, run kubeadm reset, then flush the iptables and ipvs rules as prompted and restart the docker service (see the sketch below).
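For reference, a cleanup sketch along the lines kubeadm itself suggests after a reset (the exact commands are an assumption based on the standard kubeadm reset hints, not output captured from this cluster):

# Start a node over from a clean slate
kubeadm reset
# Flush iptables rules left behind by kube-proxy / CNI
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Clear ipvs virtual servers created by kube-proxy in ipvs mode
ipvsadm --clear
# Restart docker so stale containers and networks are gone
systemctl restart docker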

Once it reports success, record all the join parameters printed at the end; they are needed to add the remaining nodes and are valid for two hours (one command joins master nodes, the other joins worker nodes).

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join k8s-company01-lb:16443 --token fp0x6g.cwuzedvtwlu1zg1f \
    --discovery-token-ca-cert-hash sha256:5d4095bc9e4e4b5300abe5a25afe1064f32c1ddcecc02a1f9b0aeee7710c3383 \
    --experimental-control-plane --certificate-key b56be86f65e73d844bb60783c7bd5d877fe20929296a3e254854d3b623bb86f7

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --experimental-upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

  kubeadm join k8s-company01-lb:16443 --token fp0x6g.cwuzedvtwlu1zg1f \
    --discovery-token-ca-cert-hash sha256:5d4095bc9e4e4b5300abe5a25afe1064f32c1ddcecc02a1f9b0aeee7710c3383
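If the join command was lost or the token and uploaded certificates have expired, they can be regenerated on master01. The commands below are the standard kubeadm 1.14 way to do this (a sketch, not output captured from this cluster):

# Print a fresh worker join command (creates a new token)
kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key; append it to the
# join command together with --experimental-control-plane to join another master
kubeadm init phase upload-certs --experimental-upload-certs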

Remember to run the following so that kubectl can talk to the cluster:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Without this you will see the following error:
# [root@k8s-master01 ~]# kubectl -n kube-system get pod
# The connection to the server localhost:8080 was refused - did you specify the right host or port?

When you check the cluster at this point, the coredns pods being Pending is expected — the network plugin is not installed yet.

# Sample output for reference
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-56c9dc7946-5c5z2               0/1     Pending   0          34m
coredns-56c9dc7946-thqwd               0/1     Pending   0          34m
etcd-k8s-master01                      1/1     Running   2          34m
kube-apiserver-k8s-master01            1/1     Running   2          34m
kube-controller-manager-k8s-master01   1/1     Running   1          33m
kube-proxy-bl9c6                       1/1     Running   2          34m
kube-scheduler-k8s-master01            1/1     Running   1          34m
Join master02 and master03 to the cluster
# Use the join parameters generated earlier to add master02 and master03 to the cluster
# (--experimental-control-plane makes the node join as a control-plane member)
kubeadm join k8s-company01-lb:16443 --token fp0x6g.cwuzedvtwlu1zg1f \
    --discovery-token-ca-cert-hash sha256:5d4095bc9e4e4b5300abe5a25afe1064f32c1ddcecc02a1f9b0aeee7710c3383 \
    --experimental-control-plane --certificate-key b56be86f65e73d844bb60783c7bd5d877fe20929296a3e254854d3b623bb86f7

# If the join parameters were not recorded, or have expired, see:
# http://wiki.sky-mobi.com:8090/pages/viewpage.action?pageId=9079715

# After joining successfully, give kubectl access to the cluster on these nodes as well
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
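Once all three masters have joined, it is worth confirming that the stacked etcd cluster really has three members. A sketch using the etcdctl binary inside one of the etcd pods and the client certificates kubeadm generates under /etc/kubernetes/pki/etcd (it assumes the stock etcd image, which ships a shell; adjust the pod name to your own first master):

kubectl -n kube-system exec etcd-k8s-company01-master01 -- sh -c \
  "ETCDCTL_API=3 etcdctl \
     --endpoints=https://127.0.0.1:2379 \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
     --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
     member list"
# Expect three members, one line per master node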

Install the Calico network plugin (run on master01)

Reference: https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/calico

Download the yaml file (version v3.6.1 here, taken from the official
https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/typha/calico.yaml
with the pod network, the replica count and the image addresses modified):

# From outside the data center (access-restricted; company public address)
curl http://111.1.17.135/yum/scripts/k8s/calico_v3.6.1.yaml -O
# From inside the data center
curl http://192.168.160.200/yum/scripts/k8s/calico_v3.6.1.yaml -O

## Modifications made to the yaml: the network CIDR was changed to match podSubnet in kubeadm-config.yaml:
##   export POD_CIDR="10.254.0.0/16" ; sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml
## replicas was raised to 3 for production use (the default is 1)
## the image addresses were changed to point at reg01.sky-mobi.com

# Allow pods to be scheduled onto the master nodes (running this on master01 is enough)
[root@k8s-company01-master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/k8s-company01-master01 untainted
node/k8s-company01-master02 untainted
node/k8s-company01-master03 untainted

# Install calico (to uninstall: kubectl delete -f calico_v3.6.1.yaml)
[root@k8s-company01-master01 ~]# kubectl apply -f calico_v3.6.1.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
service/calico-typha created
deployment.apps/calico-typha created
poddisruptionbudget.policy/calico-typha created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

# At this point all pods are coming up normally
[root@k8s-company01-master01 ~]# kubectl -n kube-system get pod
NAME                                             READY   STATUS    RESTARTS   AGE
calico-kube-controllers-749f7c8df8-knlx4         0/1     Running   0          20s
calico-kube-controllers-749f7c8df8-ndf55         0/1     Running   0          20s
calico-kube-controllers-749f7c8df8-pqxlx         0/1     Running   0          20s
calico-node-4txj7                                0/1     Running   0          21s
calico-node-9t2l9                                0/1     Running   0          21s
calico-node-rtxlj                                0/1     Running   0          21s
calico-typha-646cdc958c-7j948                    0/1     Pending   0          21s
coredns-56c9dc7946-944nt                         0/1     Running   0          4m9s
coredns-56c9dc7946-nh2sk                         0/1     Running   0          4m9s
etcd-k8s-company01-master01                      1/1     Running   0          3m26s
etcd-k8s-company01-master02                      1/1     Running   0          2m52s
etcd-k8s-company01-master03                      1/1     Running   0          110s
kube-apiserver-k8s-company01-master01            1/1     Running   0          3m23s
kube-apiserver-k8s-company01-master02            1/1     Running   0          2m53s
kube-apiserver-k8s-company01-master03            1/1     Running   1          111s
kube-controller-manager-k8s-company01-master01   1/1     Running   1          3m28s
kube-controller-manager-k8s-company01-master02   1/1     Running   0          2m52s
kube-controller-manager-k8s-company01-master03   1/1     Running   0          56s
kube-proxy-8wm4v                                 1/1     Running   0          4m9s
kube-proxy-vvdrl                                 1/1     Running   0          2m53s
kube-proxy-wnctx                                 1/1     Running   0          2m2s
kube-scheduler-k8s-company01-master01            1/1     Running   1          3m18s
kube-scheduler-k8s-company01-master02            1/1     Running   0          2m52s
kube-scheduler-k8s-company01-master03            1/1     Running   0          55s

# All master nodes are in Ready state
[root@k8s-company01-master01 ~]# kubectl get node
NAME                     STATUS   ROLES    AGE     VERSION
k8s-company01-master01   Ready    master   4m48s   v1.14.1
k8s-company01-master02   Ready    master   3m12s   v1.14.1
k8s-company01-master03   Ready    master   2m21s   v1.14.1

# We once hit coredns restarting repeatedly; it recovered after firewalld was turned off,
# and stayed healthy even after firewalld was turned back on...
  • Join the two worker nodes to the cluster (do the basic preparation described above first and install docker, kubeadm, etc.)
# The only difference from joining a master is that the --experimental-control-plane flag is omitted
kubeadm join k8s-company01-lb:16443 --token fp0x6g.cwuzedvtwlu1zg1f \
    --discovery-token-ca-cert-hash sha256:5d4095bc9e4e4b5300abe5a25afe1064f32c1ddcecc02a1f9b0aeee7710c3383

# If the join parameters were not recorded, or have expired, see
# http://wiki.sky-mobi.com:8090/pages/viewpage.action?pageId=9079715

# On success it prints:
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

### Run the kubectl commands below on any master node.
[root@k8s-company01-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                                             READY   STATUS             RESTARTS   AGE     IP             NODE                      NOMINATED NODE   READINESS GATES
calico-kube-controllers-749f7c8df8-knlx4         1/1     Running            1          5m2s    10.254.28.66   k8s-company01-master02    <none>           <none>
calico-kube-controllers-749f7c8df8-ndf55         1/1     Running            4          5m2s    10.254.31.67   k8s-company01-master03    <none>           <none>
calico-kube-controllers-749f7c8df8-pqxlx         1/1     Running            4          5m2s    10.254.31.66   k8s-company01-master03    <none>           <none>
calico-node-4txj7                                1/1     Running            0          5m3s    172.16.4.203   k8s-company01-master03    <none>           <none>
calico-node-7fqwh                                1/1     Running            0          68s     172.16.4.205   k8s-company01-worker002   <none>           <none>
calico-node-9t2l9                                1/1     Running            0          5m3s    172.16.4.201   k8s-company01-master01    <none>           <none>
calico-node-rkfxj                                1/1     Running            0          86s     172.16.4.204   k8s-company01-worker001   <none>           <none>
calico-node-rtxlj                                1/1     Running            0          5m3s    172.16.4.202   k8s-company01-master02    <none>           <none>
calico-typha-646cdc958c-7j948                    1/1     Running            0          5m3s    172.16.4.204   k8s-company01-worker001   <none>           <none>
coredns-56c9dc7946-944nt                         0/1     CrashLoopBackOff   4          8m51s   10.254.28.65   k8s-company01-master02    <none>           <none>
coredns-56c9dc7946-nh2sk                         0/1     CrashLoopBackOff   4          8m51s   10.254.31.65   k8s-company01-master03    <none>           <none>
etcd-k8s-company01-master01                      1/1     Running            0          8m8s    172.16.4.201   k8s-company01-master01    <none>           <none>
etcd-k8s-company01-master02                      1/1     Running            0          7m34s   172.16.4.202   k8s-company01-master02    <none>           <none>
etcd-k8s-company01-master03                      1/1     Running            0          6m32s   172.16.4.203   k8s-company01-master03    <none>           <none>
kube-apiserver-k8s-company01-master01            1/1     Running            0          8m5s    172.16.4.201   k8s-company01-master01    <none>           <none>
kube-apiserver-k8s-company01-master02            1/1     Running            0          7m35s   172.16.4.202   k8s-company01-master02    <none>           <none>
kube-apiserver-k8s-company01-master03            1/1     Running            1          6m33s   172.16.4.203   k8s-company01-master03    <none>           <none>
kube-controller-manager-k8s-company01-master01   1/1     Running            1          8m10s   172.16.4.201   k8s-company01-master01    <none>           <none>
kube-controller-manager-k8s-company01-master02   1/1     Running            0          7m34s   172.16.4.202   k8s-company01-master02    <none>           <none>
kube-controller-manager-k8s-company01-master03   1/1     Running            0          5m38s   172.16.4.203   k8s-company01-master03    <none>           <none>
kube-proxy-8wm4v                                 1/1     Running            0          8m51s   172.16.4.201   k8s-company01-master01    <none>           <none>
kube-proxy-k8rng                                 1/1     Running            0          68s     172.16.4.205   k8s-company01-worker002   <none>           <none>
kube-proxy-rqnkv                                 1/1     Running            0          86s     172.16.4.204   k8s-company01-worker001   <none>           <none>
kube-proxy-vvdrl                                 1/1     Running            0          7m35s   172.16.4.202   k8s-company01-master02    <none>           <none>
kube-proxy-wnctx                                 1/1     Running            0          6m44s   172.16.4.203   k8s-company01-master03    <none>           <none>
kube-scheduler-k8s-company01-master01            1/1     Running            1          8m      172.16.4.201   k8s-company01-master01    <none>           <none>
kube-scheduler-k8s-company01-master02            1/1     Running            0          7m34s   172.16.4.202   k8s-company01-master02    <none>           <none>
kube-scheduler-k8s-company01-master03            1/1     Running            0          5m37s   172.16.4.203   k8s-company01-master03    <none>           <none>

[root@k8s-company01-master01 ~]# kubectl get nodes
NAME                      STATUS   ROLES    AGE     VERSION
k8s-company01-master01    Ready    master   9m51s   v1.14.1
k8s-company01-master02    Ready    master   8m15s   v1.14.1
k8s-company01-master03    Ready    master   7m24s   v1.14.1
k8s-company01-worker001   Ready    <none>   2m6s    v1.14.1
k8s-company01-worker002   Ready    <none>   108s    v1.14.1

[root@k8s-company01-master01 ~]# kubectl get csr
NAME        AGE     REQUESTOR                            CONDITION
csr-94f5v   8m27s   system:bootstrap:fp0x6g              Approved,Issued
csr-g9tbg   2m19s   system:bootstrap:fp0x6g              Approved,Issued
csr-pqr6l   7m49s   system:bootstrap:fp0x6g              Approved,Issued
csr-vwtqq   2m      system:bootstrap:fp0x6g              Approved,Issued
csr-w486d   10m     system:node:k8s-company01-master01   Approved,Issued

[root@k8s-company01-master01 ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
  • Install metrics-server for simple monitoring, e.g. the kubectl top nodes command
# Without metrics-server:
[root@k8s-master03 ~]# kubectl top nodes
Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)

We install it with helm. First install helm (on master01):

wget http://192.168.160.200/yum/scripts/k8s/helm-v2.13.1-linux-amd64.tar.gz
# or
wget http://111.1.17.135/yum/scripts/k8s/helm-v2.13.1-linux-amd64.tar.gz
tar xvzf helm-v2.13.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
# verify
helm help

Run on every node:

yum install -y socat

Use the Microsoft (Azure) chart mirror — the Aliyun mirror has not been updated for a long time:

# helm init --client-only --stable-repo-url https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts/
# helm repo add incubator https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
helm init --client-only --stable-repo-url http://mirror.azure.cn/kubernetes/charts/
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator/
helm repo update

# Install the Tiller service in Kubernetes. The official image cannot be pulled here, so use -i to point at
# your own image; one option is registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.9.1 (Aliyun).
# The image version must match the helm client version, which you can check with helm version.

helm init --service-account tiller --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --tiller-tls-cert /etc/kubernetes/ssl/tiller001.pem --tiller-tls-key /etc/kubernetes/ssl/tiller001-key.pem --tls-ca-cert /etc/kubernetes/ssl/ca.pem --tiller-namespace kube-system --stable-repo-url http://mirror.azure.cn/kubernetes/charts/ --service-account tiller --history-max 200

Grant Tiller permissions (run on master01)

# Helm's server-side component, Tiller, is a Deployment in the kube-system namespace; it talks to the
# kube-apiserver to create and delete applications in the cluster.
# Since Kubernetes 1.6 the API server has RBAC enabled. The default Tiller deployment has no
# ServiceAccount with the necessary permissions, so its API requests get rejected. We therefore
# grant Tiller an explicit ServiceAccount and role binding.

# Create the service account and bind the cluster-admin role
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

# Check that it worked
[root@k8s-company01-master01 ~]# kubectl -n kube-system get pods|grep tiller
tiller-deploy-7bf47568d4-42wf5   1/1   Running   0   17s

[root@k8s-company01-master01 ~]# helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}

[root@k8s-company01-master01 ~]# helm repo list
NAME        URL
stable      http://mirror.azure.cn/kubernetes/charts/
local       http://127.0.0.1:8879/charts
incubator   http://mirror.azure.cn/kubernetes/charts-incubator/

## To switch repositories, remove the old one first
#helm repo remove stable
## then add the new addresses
#helm repo add stable http://mirror.azure.cn/kubernetes/charts/
#helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator/
#helm repo update

Install metrics-server with helm (on master01, since helm is only installed there)

# Create metrics-server-custom.yaml
cat >> metrics-server-custom.yaml <<EOF
image:
  repository: reg01.sky-mobi.com/k8s/gcr.io/google_containers/metrics-server-amd64
  tag: v0.3.1
args:
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
EOF

# Install metrics-server (-n here is the release name)
[root@k8s-master01 ~]# helm install stable/metrics-server -n metrics-server --namespace kube-system --version=2.5.1 -f metrics-server-custom.yaml

[root@k8s-company01-master01 ~]# kubectl get pod -n kube-system | grep metrics
metrics-server-dcbdb9468-c5f4n   1/1   Running   0   21s

# After the values file is applied, the old metrics-server pod (if any) is replaced automatically.
# A minute or two after the new pod is up, kubectl top starts returning data:
[root@k8s-company01-master01 ~]# kubectl top node
NAME                      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-company01-master01    404m         5%     1276Mi          4%
k8s-company01-master02    493m         6%     1240Mi          3%
k8s-company01-master03    516m         6%     1224Mi          3%
k8s-company01-worker001   466m         0%     601Mi           0%
k8s-company01-worker002   244m         0%     516Mi           0%
  • Install prometheus-operator with helm
# For easier management, create a dedicated Namespace "monitoring"; all Prometheus Operator
# components will be deployed into it.
kubectl create namespace monitoring

## Customize the prometheus-operator values
# helm fetch stable/prometheus-operator --version=5.0.3 --untar
# cat prometheus-operator/values.yaml | grep -v '#' | grep -v ^$ > prometheus-operator-custom.yaml
# Keep only the parts where we override images, plus the https connection to etcd, for example:
# Reference: https://fengxsong.github.io/2018/05/30/Using-helm-to-manage-prometheus-operator/

cat >> prometheus-operator-custom.yaml << EOF
## prometheus-operator/values.yaml
alertmanager:
  service:
    nodePort: 30503
    type: NodePort
  alertmanagerSpec:
    image:
      repository: reg01.sky-mobi.com/k8s/quay.io/prometheus/alertmanager
      tag: v0.16.1
prometheusOperator:
  image:
    repository: reg01.sky-mobi.com/k8s/quay.io/coreos/prometheus-operator
    tag: v0.29.0
    pullPolicy: IfNotPresent
  configmapReloadImage:
    repository: reg01.sky-mobi.com/k8s/quay.io/coreos/configmap-reload
    tag: v0.0.1
  prometheusConfigReloaderImage:
    repository: reg01.sky-mobi.com/k8s/quay.io/coreos/prometheus-config-reloader
    tag: v0.29.0
  hyperkubeImage:
    repository: reg01.sky-mobi.com/k8s/k8s.gcr.io/hyperkube
    tag: v1.12.1
    pullPolicy: IfNotPresent
prometheus:
  service:
    nodePort: 30504
    type: NodePort
  prometheusSpec:
    image:
      repository: reg01.sky-mobi.com/k8s/quay.io/prometheus/prometheus
      tag: v2.7.1
    secrets: [etcd-client-cert]
kubeEtcd:
  serviceMonitor:
    scheme: https
    insecureSkipVerify: false
    serverName: ""
    caFile: /etc/prometheus/secrets/etcd-client-cert/ca.crt
    certFile: /etc/prometheus/secrets/etcd-client-cert/healthcheck-client.crt
    keyFile: /etc/prometheus/secrets/etcd-client-cert/healthcheck-client.key
## prometheus-operator/charts/grafana/values.yaml
grafana:
  service:
    nodePort: 30505
    type: NodePort
  image:
    repository: reg01.sky-mobi.com/k8s/grafana/grafana
    tag: 6.0.2
  sidecar:
    image: reg01.sky-mobi.com/k8s/kiwigrid/k8s-sidecar:0.0.13
## prometheus-operator/charts/kube-state-metrics/values.yaml
kube-state-metrics:
  image:
    repository: reg01.sky-mobi.com/k8s/k8s.gcr.io/kube-state-metrics
    tag: v1.5.0
## prometheus-operator/charts/prometheus-node-exporter/values.yaml
prometheus-node-exporter:
  image:
    repository: reg01.sky-mobi.com/k8s/quay.io/prometheus/node-exporter
    tag: v0.17.0
EOF

## Note: the grafana / kube-state-metrics / prometheus-node-exporter sections above correspond to the
## sub-charts in the charts directory:
#[root@k8s-master01 ~]# ll prometheus-operator/charts/
#total 0
#drwxr-xr-x 4 root root 114 Apr  1 00:48 grafana
#drwxr-xr-x 3 root root  96 Apr  1 00:18 kube-state-metrics
#drwxr-xr-x 3 root root 110 Apr  1 00:20 prometheus-node-exporter

# Create the secret holding the etcd client certificates:
kubectl -n monitoring create secret generic etcd-client-cert --from-file=/etc/kubernetes/pki/etcd/ca.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key

helm install stable/prometheus-operator --version=5.0.3 --name=monitoring --namespace=monitoring -f prometheus-operator-custom.yaml

## To remove everything and start over, delete the release by name with helm:
#helm del --purge monitoring
#kubectl delete crd prometheusrules.monitoring.coreos.com
#kubectl delete crd servicemonitors.monitoring.coreos.com
#kubectl delete crd alertmanagers.monitoring.coreos.com

# To re-apply changed values, do not delete and re-install (that may error out); just upgrade:
helm upgrade monitoring stable/prometheus-operator --version=5.0.3 --namespace=monitoring -f prometheus-operator-custom.yaml

[root@k8s-company01-master01 ~]# kubectl -n monitoring get pod
NAME                                                     READY   STATUS    RESTARTS   AGE
alertmanager-monitoring-prometheus-oper-alertmanager-0   2/2     Running   0          29m
monitoring-grafana-7dd5cf9dd7-wx8mz                      2/2     Running   0          29m
monitoring-kube-state-metrics-7d98487cfc-t6qqw           1/1     Running   0          29m
monitoring-prometheus-node-exporter-fnvp9                1/1     Running   0          29m
monitoring-prometheus-node-exporter-kczcq                1/1     Running   0          29m
monitoring-prometheus-node-exporter-m8kf6                1/1     Running   0          29m
monitoring-prometheus-node-exporter-mwc4g                1/1     Running   0          29m
monitoring-prometheus-node-exporter-wxmt8                1/1     Running   0          29m
monitoring-prometheus-oper-operator-7f96b488f6-2j7h5     1/1     Running   0          29m
prometheus-monitoring-prometheus-oper-prometheus-0       3/3     Running   1          28m

[root@k8s-company01-master01 ~]# kubectl get svc -n monitoring
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,6783/TCP   31m
monitoring-grafana                        NodePort    10.109.159.105   <none>        80:30579/TCP        32m
monitoring-kube-state-metrics             ClusterIP   10.100.31.235    <none>        8080/TCP            32m
monitoring-prometheus-node-exporter       ClusterIP   10.109.119.13    <none>        9100/TCP            32m
monitoring-prometheus-oper-alertmanager   NodePort    10.105.171.135   <none>        9093:31309/TCP      32m
monitoring-prometheus-oper-operator       ClusterIP   10.98.135.170    <none>        8080/TCP            32m
monitoring-prometheus-oper-prometheus     NodePort    10.96.15.36      <none>        9090:32489/TCP      32m
prometheus-operated                       ClusterIP   None             <none>        9090/TCP            31m

# Check for abnormal alerts. The first alert on the alerts page, Watchdog, always fires by design —
# it is used to verify that the alerting pipeline itself works.
http://172.16.4.200:32489/alerts
http://172.16.4.200:32489/targets

# The following would install kubernetes-dashboard; it is of limited use, so we skip it in production for now:
#helm install --name=kubernetes-dashboard stable/kubernetes-dashboard --version=1.4.0 --namespace=kube-system --set image.repository=reg01.sky-mobi.com/k8s/k8s.gcr.io/kubernetes-dashboard-amd64,image.tag=v1.10.1,rbac.clusterAdminRole=true

# Heapster was removed in Kubernetes 1.13 (https://github.com/kubernetes/heapster/blob/master/docs/deprecation.md);
# metrics-server and Prometheus are the recommended replacements.
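To log in to the Grafana NodePort above you need the admin password generated by the chart. Assuming the release is named monitoring as above, the chart usually stores it in a secret called monitoring-grafana (a convenience sketch; adjust the secret name if your release differs):

# Default user is admin; fetch the generated password from the chart's secret
kubectl -n monitoring get secret monitoring-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo
# Then browse to the NodePort shown by 'kubectl get svc -n monitoring' for monitoring-grafana,
# e.g. http://172.16.4.200:30579/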
