

Installing a single-master, two-node Kubernetes cluster with kubeadm


Hosts:
master: 172.16.40.97
node1: 172.16.40.98
node2: 172.16.40.99

# 1. Initialize the Kubernetes environment (all three hosts)

Disable the firewall and SELinux:

```bash
systemctl stop firewalld && systemctl disable firewalld
sed -ri '/^[^#]*SELINUX=/s#=.+$#=disabled#' /etc/selinux/config
setenforce 0
```

Configure the chrony time-sync client:

```bash
yum install chrony -y
cat <<EOF > /etc/chrony.conf
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF
systemctl restart chronyd && systemctl enable chronyd
```
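To confirm each host is actually syncing against the Aliyun NTP server, you can check chrony's source list and tracking status:

```bash
chronyc sources -v   # lists NTP sources; '^*' marks the server currently synced to
chronyc tracking     # shows the current offset and stratum
```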

Make sure the hosts can resolve one another's names (via DNS or /etc/hosts) and can reach each other over SSH; a sketch is shown below.
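A minimal sketch, using the hostnames master/node1/node2 that appear later in this article; run the key distribution from the master:

```bash
# static name resolution for all three hosts
cat <<EOF >> /etc/hosts
172.16.40.97 master
172.16.40.98 node1
172.16.40.99 node2
EOF

# generate a key pair once on the master, then push it to every host
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in master node1 node2; do ssh-copy-id root@$h; done
```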

Upgrade the kernel:

```bash
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install wget git jq psmisc -y
yum install https://mirrors.aliyun.com/saltstack/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
sed -i "s/repo.saltstack.com/mirrors.aliyun.com\/saltstack/g" /etc/yum.repos.d/salt-latest.repo
yum update -y
```

Update and reboot.

Pick the kernel version you want (4.18.9 is used here):

```bash
export Kernel_Version=4.18.9-1
wget http://mirror.rc.usf.edu/compute_lock/elrepo/kernel/el7/x86_64/RPMS/kernel-ml{,-devel}-${Kernel_Version}.el7.elrepo.x86_64.rpm
yum localinstall -y kernel-ml*
```

Check that the new kernel ships the nf_conntrack_ipv4 module:

```bash
find /lib/modules -name '*nf_conntrack_ipv4*' -type f
```

Change the kernel boot order. The default entry is normally index 1, but a newly installed kernel is inserted at the front as index 0 (skip this step if you are willing to pick the kernel manually at every boot):

```bash
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
```

Confirm that the default boot entry now points at the kernel installed above:

```bash
grubby --default-kernel
```

Docker's official kernel check script recommends enabling user namespaces (RHEL7/CentOS7: "User namespaces disabled; add 'user_namespace.enable=1' to boot command line"); enable them with:

```bash
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
```

Reboot to load the new kernel:

```bash
reboot
```

Set the required kernel parameters in /etc/sysctl.d/k8s.conf:

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl --system
```
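Note that the two net.bridge.* keys only exist once the br_netfilter module is loaded; if `sysctl --system` complains about them, load the module first and make it persistent (a minimal sketch):

```bash
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload the module on every boot
sysctl --system                                             # re-apply the settings
```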

Check whether the kernel and its modules are suitable for running Docker (Linux only):

```bash
curl https://raw.githubusercontent.com/docker/docker/master/contrib/check-config.sh > check-config.sh
bash ./check-config.sh
```

Install docker-ce:

```bash
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum install docker-ce-17.06.2.ce -y
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service
systemctl daemon-reload && systemctl enable docker && systemctl start docker
```

Enable Docker on boot; on CentOS, docker command completion also has to be set up by hand after installation:

```bash
yum install -y epel-release bash-completion && cp /usr/share/bash-completion/completions/docker /etc/bash_completion.d/
systemctl enable --now docker
```
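Because the kubelet will later be started with --cgroup-driver=cgroupfs, it is worth confirming Docker reports the same driver (cgroupfs is the default for this Docker release; if yours reports systemd, the two must be brought in line):

```bash
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: cgroupfs
```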

Download the official image bundle for Kubernetes v1.13.2 on every node. Netdisk address: https://pan.baidu.com/s/1NETu4uZrd5ijjXICARNe5A password: 4oco
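A minimal sketch for importing the bundle on each node, assuming it unpacks into per-image tar archives (the directory name below is hypothetical):

```bash
# load every image archive from the unpacked bundle into the local Docker store
for f in /root/k8s-v1.13.2-images/*.tar; do
    docker load -i "$f"
done
docker images   # verify the v1.13.2 control-plane images are now present
```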

# 2. Install the Kubernetes cluster

Install kubectl, kubelet, and kubeadm on all three hosts:

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
```

On the master, have the kubelet ignore the swap-not-disabled warning:

```bash
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs"
EOF
systemctl daemon-reload
```
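Alternatively, instead of telling the kubelet to tolerate swap, you can disable swap outright, which is what upstream Kubernetes recommends (a sketch):

```bash
swapoff -a                             # disable swap immediately
sed -i '/\sswap\s/s/^/#/' /etc/fstab   # comment out swap entries so it stays off after reboot
```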


Initialize the control plane on the master with kubeadm:

```bash
kubeadm init --kubernetes-version=v1.13.2 --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/16 --ignore-preflight-errors=Swap
```

Sample output:

```
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.40.97]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.16.40.97 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.16.40.97 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.003620 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2s9xxt.8lgyw6yzt21qq8xf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.16.40.97:6443 --token 2s9xxt.8lgyw6yzt21qq8xf --discovery-token-ca-cert-hash sha256:c141fb0608b4b83136272598d2623589d73546762abc987391479e8e049b0d76
```

Configure kubectl access to the cluster on the master:

```bash
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```

Clone the manifest repository from GitHub on the master:

```bash
git clone https://github.com/sky-daiji/k8s-install.git
```

Next, install the flannel network plugin:

```bash
cd /root/k8s-install
kubectl apply -f kube-flannel/
```
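Before moving on, wait for the flannel DaemonSet pods to reach Running on every node (a sketch; `app=flannel` is the label the stock kube-flannel manifest uses, so adjust if this repo's copy differs):

```bash
kubectl get pods -n kube-system -l app=flannel -o wide   # expect one Running pod per node
```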

Check the cluster status from the master:

```
[root@master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
```

Join the worker nodes to the cluster (run on node1 and node2):

```bash
kubeadm join 172.16.40.97:6443 --token 2s9xxt.8lgyw6yzt21qq8xf --discovery-token-ca-cert-hash sha256:c141fb0608b4b83136272598d2623589d73546762abc987391479e8e049b0d76
```
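If swap is still enabled on a worker, the join preflight check will fail just as it would on the master; either disable swap as shown earlier or append the same override (a sketch):

```bash
kubeadm join 172.16.40.97:6443 --token 2s9xxt.8lgyw6yzt21qq8xf \
    --discovery-token-ca-cert-hash sha256:c141fb0608b4b83136272598d2623589d73546762abc987391479e8e049b0d76 \
    --ignore-preflight-errors=Swap
```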

Check that all nodes have joined the cluster:

```
[root@master ~]# kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   15m   v1.13.2
node1    Ready    <none>   13m   v1.13.2
node2    Ready    <none>   13m   v1.13.2
```

Check how the individual Kubernetes components are running, as shown below.
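All control-plane components and add-ons run as pods in the kube-system namespace, so listing them is enough:

```bash
kubectl get pods -n kube-system -o wide   # every pod should be Running without accumulating restarts
```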

Enable the ipvs modules on all nodes and switch kube-proxy to ipvs mode:

```bash
yum install -y ipvsadm

cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
source /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# set mode to "ipvs" in the kube-proxy ConfigMap
kubectl edit cm kube-proxy -n kube-system

# delete the kube-proxy pods in bulk so they are recreated with the new mode
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
```
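To confirm kube-proxy really came back in ipvs mode, check its logs and the kernel's virtual-server table (a sketch; the pod name will differ on your cluster):

```bash
# should print a line like "Using ipvs Proxier"
kubectl logs -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name | head -n 1) | grep -i ipvs
ipvsadm -Ln   # service VIPs should now appear as ipvs virtual servers
```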


Install the kubernetes-dashboard add-on:

```bash
cd /root/k8s-install
kubectl apply -f kubernetes-dashboard/
```

Check whether the kubernetes-dashboard add-on installed successfully:

```bash
kubectl get pod -n kube-system | grep kubernetes-dashboard
```

Access the Dashboard:

https://172.16.40.97:30091

Log in using the Token option; fetch the admin token with:

```bash
kubectl describe secret/$(kubectl get secret -n kube-system | grep admin | awk '{print $1}') -n kube-system
```
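The token lookup above assumes an admin ServiceAccount already exists (the repo's manifests appear to create one). If yours doesn't, a minimal cluster-admin account for token login can be created like this (a sketch; the name dashboard-admin is hypothetical):

```bash
# hypothetical helper: a cluster-admin ServiceAccount for dashboard token login
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOF
```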


Install heapster on the master. Since v1.11.0, Kubernetes no longer uses heapster to collect pod performance data (metrics-server replaces it), but this dashboard version still relies on heapster to render its performance graphs:

```bash
cd /root/k8s-install
kubectl apply -f heapster/
```


Install metrics-server:

```bash
cd /root/k8s-install
kubectl apply -f metrics-server/
```

Wait about five minutes, then check that performance data is being collected:

```
[root@master01 ~]# kubectl top pods -n kube-system
NAME                                   CPU(cores)   MEMORY(bytes)
coredns-86c58d9df4-n5brl               2m           15Mi
coredns-86c58d9df4-rhl5d               2m           20Mi
etcd-master01                          14m          97Mi
heapster-c8847db7d-rw845               1m           40Mi
kube-apiserver-master01                21m          553Mi
kube-controller-manager-master01       23m          95Mi
kube-flannel-ds-amd64-bh5dm            2m           11Mi
kube-flannel-ds-amd64-bzfnm            2m           17Mi
kube-flannel-ds-amd64-clrmd            2m           14Mi
kube-proxy-cgcqj                       3m           21Mi
kube-proxy-lrzh7                       3m           24Mi
kube-proxy-wkgjq                       3m           18Mi
kube-scheduler-master01                6m           20Mi
kubernetes-dashboard-57df4db6b-tzvcc   1m           22Mi
metrics-server-9d78d4d64-zjv4z         1m           28Mi
monitoring-grafana-b4c79dbd4-bzk9r     1m           29Mi
monitoring-influxdb-576db68c87-57sg7   1m           74Mi
```


Install Prometheus:

```bash
cd /root/k8s-install
kubectl apply -f prometheus/
```

After installation, open the Prometheus UI to inspect the collected metrics: http://172.16.40.97:30013/


After installation, open the Grafana UI (username and password are both admin): http://172.16.40.97:30006. Once logged in, go to the data source settings and add a Prometheus data source.
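The in-cluster service name depends on the manifests in the prometheus/ directory, so treat the URL below as an assumption (verify with `kubectl get svc -n kube-system | grep prometheus`); the NodePort address above works as a fallback. A sketch using Grafana's HTTP API instead of the UI:

```bash
# hypothetical sketch: create the Prometheus data source via Grafana's HTTP API
curl -s -u admin:admin -H 'Content-Type: application/json' \
     -X POST http://172.16.40.97:30006/api/datasources \
     -d '{
           "name": "prometheus",
           "type": "prometheus",
           "url": "http://prometheus.kube-system.svc:9090",
           "access": "proxy",
           "isDefault": true
         }'
```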


Then open the dashboard import page at http://172.16.40.97:30006/dashboard/import and import the dashboards from the heapster/grafana-dashboard directory: "Kubernetes App Metrics" and "Kubernetes cluster monitoring (via Prometheus)".


Reposted from: https://www.cnblogs.com/skymydaiji/p/skymydaiji.html
