

[Troubleshooting] Building a single-master K8s cluster on CentOS with kubeadm

Published: 2024/1/8

Installation steps

Follow this blogger's guide for the base installation steps.

---------

[ningan@k8s-master pv]$ kubectl get pod
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Switching to the root user fixes this.

-----------

Error message

[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1

Solution

[root@k8s-node1 ningan]# echo "1" > /proc/sys/net/ipv4/ip_forward
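The echo above only fixes the flag until the next reboot. A minimal sketch of making it persistent via the standard sysctl.d mechanism; the k8s.conf filename and the two bridge-nf keys are my own additions (commonly recommended for kubeadm preflight), not from the original post, and the sketch writes to a scratch directory so it can be run safely anywhere. On the real host you would write to /etc/sysctl.d/k8s.conf and run `sysctl --system`.

```shell
# Hypothetical persistence sketch: on the real host, write to
# /etc/sysctl.d/k8s.conf and then run `sysctl --system`. Here we use a
# scratch directory so the sketch is safe to run unprivileged.
SYSCTL_DIR=$(mktemp -d)
CONF="$SYSCTL_DIR/k8s.conf"

cat > "$CONF" <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# On the real host: sysctl --system
cat "$CONF"
```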

Error message

[root@k8s-master ningan]# kubeadm init --apiserver-advertise-address=192.168.11.147 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
this version of kubeadm only supports deploying clusters with the control plane version >= 1.19.0. Current version: v1.18.0
To see the stack trace of this error execute with --v=5 or higher


Solution

List the required images:

[root@k8s-master ningan]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.20.1
k8s.gcr.io/kube-controller-manager:v1.20.1
k8s.gcr.io/kube-scheduler:v1.20.1
k8s.gcr.io/kube-proxy:v1.20.1
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

Pull the images from a different registry first, as shown below:

[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1
[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
[root@k8s-master ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

Run docker images to inspect the local Docker registry: you will see that every image is prefixed with registry.cn-hangzhou.aliyuncs.com/google_containers/, which does not match the names required by kubeadm config images list. We need to rename the images, i.e. re-tag them:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1 k8s.gcr.io/kube-controller-manager:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0

# Note: on my second master deployment the version had changed to 1.20.2, so I record the commands here for next time
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.2 k8s.gcr.io/kube-apiserver:v1.20.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.2 k8s.gcr.io/kube-controller-manager:v1.20.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.2 k8s.gcr.io/kube-scheduler:v1.20.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.2 k8s.gcr.io/kube-proxy:v1.20.2
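Since the same pull-and-retag dance recurs on every version bump, it can be scripted. This is a sketch, not part of the original post; the mirror prefix and image list come from the commands above, and the generated command text is only printed (pipe it into sh on the real host) so the loop itself needs no Docker daemon:

```shell
# Build the pull+tag commands for all control-plane images of one version.
# MIRROR and IMAGES are assumptions carried over from the commands above.
MIRROR="registry.cn-hangzhou.aliyuncs.com/google_containers"
TARGET="k8s.gcr.io"
IMAGES="kube-apiserver:v1.20.1 kube-controller-manager:v1.20.1 kube-scheduler:v1.20.1 kube-proxy:v1.20.1 pause:3.2 etcd:3.4.13-0 coredns:1.7.0"

CMDS=""
for img in $IMAGES; do
  CMDS="$CMDS
docker pull $MIRROR/$img
docker tag $MIRROR/$img $TARGET/$img"
done

# Print the generated commands; on the real host run them with: sh -c "$CMDS"
echo "$CMDS"
```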

After that I ran the initialization again. It still failed, so on to the next error.

Error message

[root@k8s-master ningan]# kubeadm reset && kubeadm init --kubernetes-version=v1.20.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0111 13:07:21.865192   75781 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.11.147:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.11.147:6443: connect: connection refused
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0111 13:07:23.838209   75781 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.147 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.147 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
    - 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
[root@k8s-master ningan]#

Solution

I first tried the approach in one link, without success.
Then another link solved it. The fix is simply to copy /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf over /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

The file at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf looked like this:

The file at /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf looked like this:

Why this fixes the problem I still don't know; if you do, please let me know.

The exact steps:

[root@k8s-master ningan]# cd /etc/systemd/system/kubelet.service.d/
[root@k8s-master kubelet.service.d]# ls
10-kubeadm.conf

# Save the old file first, so it can be restored if this doesn't work
[root@k8s-master kubelet.service.d]# cp 10-kubeadm.conf 10-kubeadm.conf_tmp
[root@k8s-master kubelet.service.d]# cd /usr/lib/systemd/system/kubelet.service.d
[root@k8s-master kubelet.service.d]# cp 10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
cp: overwrite "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"? yes
[root@k8s-master kubelet.service.d]# systemctl daemon-reload
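The same back-up-then-copy steps can be written as a small script. This is a sketch, not from the original post; the ROOT variable is my own addition so the copy logic can be exercised against a scratch tree (the default here) instead of the live system, where you would set ROOT empty and finish with `systemctl daemon-reload`:

```shell
# ROOT defaults to a scratch tree with dummy files so the sketch is safe to
# run anywhere; on the real host set ROOT= (empty) before sourcing this.
if [ -z "${ROOT+x}" ]; then
  ROOT=$(mktemp -d)
  mkdir -p "$ROOT/usr/lib/systemd/system/kubelet.service.d" \
           "$ROOT/etc/systemd/system/kubelet.service.d"
  echo "vendor drop-in" > "$ROOT/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
  echo "old drop-in"    > "$ROOT/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
fi
SRC="$ROOT/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf"
DST="$ROOT/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"

# Back up the current drop-in so it can be rolled back, then overwrite it
[ -f "$DST" ] && cp "$DST" "$DST.bak"
cp -f "$SRC" "$DST"

# On the real host (ROOT empty), reload systemd afterwards:
#   systemctl daemon-reload
```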

Run kubeadm reset && kubeadm init --kubernetes-version=v1.20.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap again, and this time it succeeds!

[root@k8s-master kubelet.service.d]# kubeadm reset && kubeadm init --kubernetes-version=v1.20.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0111 13:27:44.374803   80499 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://192.168.11.147:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 192.168.11.147:6443: connect: connection refused
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0111 13:27:45.962138   80499 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.147 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.147 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.504521 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: mosffs.9c4krp4qbo7ox0fa
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.147:6443 --token mosffs.9c4krp4qbo7ox0fa \
    --discovery-token-ca-cert-hash sha256:6643599caaa15b516d08d8fa7ec7508e3d9a5224a478651f1380d5d12bbe6416
[root@k8s-master kubelet.service.d]#


So happy — it finally succeeded!

Error: deploying the CNI network plugin

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
This URL is not reachable over an ordinary network connection, so the file cannot be downloaded directly.

Solution

Download the file in advance, upload it to the server as kube-flannel.yml, and apply that local file directly:

[root@k8s-master ningan]# kubectl apply -f kube-flannel.yml

The file is included at the end of this article for anyone who needs it!

Error: node1 and node2 NotReady


It took a long time, but I finally solved it!

My reasoning:
As the screenshot above showed, the master was already Ready while node1 and node2 were still NotReady; the problematic Init:0/1 and ContainerCreating pods were all on node1 and node2, so the problem had to be there.

I removed both nodes first and re-added one of them to experiment.

The problematic pod was kube-proxy-njlgg, so I ran the command below. (--namespace kube-system is mandatory; without it you cannot see the details.)
(The key diagnostic step: kubectl describe pod kube-proxy-njlgg --namespace kube-system)

[root@k8s-master ningan]# kubectl describe pod kube-proxy-njlgg --namespace kube-system


As the screenshot showed, these images could not be pulled on the node.

Solution

After thinking it over, I concluded that node1 was missing these images. The master already had them downloaded, so I downloaded them on node1 as well.
(The key fix: download these images on node1 — pull from a domestic mirror first, then re-tag.)

[root@k8s-node1 ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
[root@k8s-node1 ningan]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
[root@k8s-node1 ningan]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1
[root@k8s-node1 ningan]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.1 k8s.gcr.io/kube-proxy:v1.20.1
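Any other image a NotReady node complains about can be fetched the same way: replace the k8s.gcr.io prefix with the mirror prefix, pull, then re-tag. A small helper sketch (the mirror prefix is an assumption carried over from the earlier commands; the helper name mirror_name is my own):

```shell
# Map a k8s.gcr.io image name to its assumed Aliyun mirror equivalent.
mirror_name() {
  echo "$1" | sed 's|^k8s.gcr.io/|registry.cn-hangzhou.aliyuncs.com/google_containers/|'
}

# Example: print the pull+tag commands for one missing image
img="k8s.gcr.io/pause:3.2"
echo "docker pull $(mirror_name "$img")"
echo "docker tag $(mirror_name "$img") $img"
```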

Back on the master: if it still doesn't work, re-run the commands a few times, delete and re-apply kube-flannel.yml, and delete and re-join node1. There is a timing element involved, so after a while it comes good!



node2 was fixed the same way, and it also succeeded!



Deploy nginx to test whether the cluster works

Error message

error execution phase preflight: couldn't validate the identity of the API Server: Get "https://192.168.11.152:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": x509: certificate has expired or is not yet valid: current time 2021-01-14T15:21:21+08:00 is before 2021-01-14T08:52:57Z
To see the stack trace of this error execute with --v=5 or higher

Solution

Synchronize the clocks of master and node; run on both:

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

Problem: what to do when the token expires?

# The old join command
kubeadm join 192.168.11.152:6443 --token 2ee4bk.dmlglcduipwk7wyg \
    --discovery-token-ca-cert-hash sha256:7c205406380f65a0dcb365b5fcb51a510488e833a4b5441052180133acde1e8b

# Generate a new token
[root@k8s-master ningan]# kubeadm token create
33g27f.2g8khuprb54p8fdb
[root@k8s-master ningan]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
7c205406380f65a0dcb365b5fcb51a510488e833a4b5441052180133acde1e8b

# The new join command; it is best to prepend kubeadm reset
kubeadm reset && kubeadm join 192.168.11.152:6443 --token 33g27f.2g8khuprb54p8fdb \
    --discovery-token-ca-cert-hash sha256:7c205406380f65a0dcb365b5fcb51a510488e833a4b5441052180133acde1e8b
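The discovery hash in that openssl pipeline is just the SHA-256 of the CA certificate's DER-encoded public key. A self-contained sketch of the same pipeline (it generates a throwaway self-signed cert, my own addition, so it can be demonstrated without a cluster; on the master the input would be /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA cert to stand in for /etc/kubernetes/pki/ca.crt
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Same pipeline as above: public key -> DER -> sha256 -> strip the prefix
HASH=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$HASH"   # this is the value for --discovery-token-ca-cert-hash
```

Note also that kubeadm can print a ready-made join command (token plus hash) in one step with `kubeadm token create --print-join-command`.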

Problem: The connection to the server localhost:8080 was refused - did you specify the right host or port?

Run the command again as the root user and it works.

-----------

Problem: what if the IP address changes? (Second installation)

As root, edit the hosts file and update the IP addresses:

[ningan@k8s-master ~]$ su
[root@k8s-master ningan]# vim /etc/hosts

Deploy the Kubernetes master node

[root@k8s-master ningan]# kubeadm init --apiserver-advertise-address=192.168.11.155 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

Errors appear:
[ERROR Port-6443]: Port 6443 is in use
[ERROR Port-10259]: Port 10259 is in use
[ERROR Port-10257]: Port 10257 is in use
[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR Port-2379]: Port 2379 is in use
[ERROR Port-2380]: Port 2380 is in use
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty

Prepend kubeadm reset and redeploy:

[root@k8s-master ningan]# kubeadm reset && kubeadm init --apiserver-advertise-address=192.168.11.155 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

The master node deployed successfully!

Run on the master node:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The master node is now in Ready state.

Run on node1 (node1 and node2 can be done in parallel, since their role is identical):

[root@k8s-node1 ningan]# kubeadm join 192.168.11.155:6443 --token cwxtdy.9u4jljdiy6raxlkw \
> --discovery-token-ca-cert-hash sha256:bdd3952d858d19f46e6d2a281d3596c7f4d6b0850e02c92499735d03968e8bb5

An error appears:

Tip: whenever you see these "already exists" errors, prepend kubeadm reset.

With that added, run on node1 again:

[root@k8s-node1 ningan]# kubeadm reset && kubeadm join 192.168.11.155:6443 --token cwxtdy.9u4jljdiy6raxlkw --discovery-token-ca-cert-hash sha256:bdd3952d858d19f46e6d2a281d3596c7f4d6b0850e02c92499735d03968e8bb5

You will find that it hangs for a long time on [WARNING Hostname]: hostname "k8s-node1": lookup k8s-node1 on 192.168.11.2:53: no such host, and after a long wait (I don't know exactly how long) it reports: error execution phase preflight: couldn't validate the identity of the API Server: Get "https://192.168.11.155:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": x509: certificate has expired or is not yet valid: current time 2021-01-15T05:22:28+08:00 is before 2021-01-16T01:46:52Z
To see the stack trace of this error execute with --v=5 or higher

This is caused by unsynchronized clocks; run time synchronization on master, node1 and node2:

# Time synchronization
yum install ntpdate -y   # run if not yet installed
ntpdate time.windows.com

Then run on node1 again:

[root@k8s-node1 ningan]# kubeadm reset && kubeadm join 192.168.11.155:6443 --token cwxtdy.9u4jljdiy6raxlkw --discovery-token-ca-cert-hash sha256:bdd3952d858d19f46e6d2a281d3596c7f4d6b0850e02c92499735d03968e8bb5

It succeeds!

Back on the master node: everything is deployed. Done!

-----------

kube-flannel.yml

The contents of https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
