K8S — single-master binary deployment, extended to a dual-master setup (a local lab; to avoid lag, the multi-master tier uses two machines instead of three)
K8S — single-master and dual-master (built on the single-master setup) binary deployment
- 一、準(zhǔn)備
- 二、ETCD集群
- 1、master節(jié)點(diǎn)
- 2、node節(jié)點(diǎn)
- 三、Flannel網(wǎng)絡(luò)部署
- 四、測試容器間互通
- 五、單master節(jié)點(diǎn)部署
- 1、部署master組件
- 2、node節(jié)點(diǎn)部署
- ①、node1節(jié)點(diǎn)
- ②、node2節(jié)點(diǎn)
- 六、基于上一步單master節(jié)點(diǎn)的雙master節(jié)點(diǎn)部署
- 1、搭建master2節(jié)點(diǎn)
- 2、nginx負(fù)載均衡部署
- 3、node節(jié)點(diǎn)配置
- 七、測試
- 1、master1上進(jìn)行操作
- 2、node1節(jié)點(diǎn)curl測試
一、準(zhǔn)備
| Role | OS / software | IP address |
| --- | --- | --- |
| k8s-master | CentOS 7 (1708) | 192.168.184.140 |
| k8s-master2 | CentOS 7 (1708) | 192.168.184.145 |
| k8s-node01 | CentOS 7 (1708) | 192.168.184.141 |
| k8s-node02 | CentOS 7 (1708) | 192.168.184.142 |
| nginx_lbm | nginx (master) | 192.168.184.146 |
| nginx_lbb | nginx (backup) | 192.168.184.147 |
| VIP | floating address | 192.168.184.200 |
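For convenience, the machines in the table above can resolve each other by name. A minimal sketch (the hostnames are this lab's; the VIP is omitted because it floats between the nginx nodes) that builds the hosts entries in a temporary file so they can be reviewed before being appended to /etc/hosts:

```shell
# Hypothetical helper: stage hosts entries for the lab machines in a temp file.
# Copy to /etc/hosts on every machine only after review.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.184.140 k8s-master
192.168.184.145 k8s-master2
192.168.184.141 k8s-node01
192.168.184.142 k8s-node02
192.168.184.146 nginx-lbm
192.168.184.147 nginx-lbb
EOF
grep -c '192.168.184' "$HOSTS_FILE"   # prints 6
```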
II. etcd cluster
1、master節(jié)點(diǎn)
#創(chuàng)建/k8s目錄 mkdir k8s cd k8s#創(chuàng)建證書制作的腳本 vim etcd-cert.sh cat > ca-config.json <<EOF #CA證書配置文件 {"signing": { #鍵名稱"default": {"expiry": "87600h" #證書有效期(10年)},"profiles": { #簡介"www": { #名稱"expiry": "87600h","usages": [ #使用方法"signing", #鍵"key encipherment", #密鑰驗(yàn)證(密鑰驗(yàn)證要設(shè)置在CA證書中)"server auth", #服務(wù)器端驗(yàn)證"client auth" #客戶端驗(yàn)證]}}} } EOF cat > ca-csr.json <<EOF #CA簽名 {"CN": "etcd CA", #CA簽名為etcd指定(三個(gè)節(jié)點(diǎn)均需要)"key": {"algo": "rsa", #使用rsa非對稱密鑰的形式"size": 2048 #密鑰長度為2048},"names": [ #在證書中定義信息(標(biāo)準(zhǔn)格式){"C": "CN", #名稱"L": "Beijing", "ST": "Beijing" }] } EOF cfssl gencert -initca ca-csr.json | cfssljson -bare ca - cat > server-csr.json <<EOF #服務(wù)器端的簽名 {"CN": "etcd","hosts": [ #定義三個(gè)節(jié)點(diǎn)的IP地址"192.168.184.140","192.168.184.141","192.168.184.142"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing"}] } EOF cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server #cfssl 為證書制作工具#創(chuàng)建啟動(dòng)腳本 cat etcd.sh #!/bin/bash #以下為使用格式:etcd名稱 當(dāng)前etcd的IP地址+完整的集群名稱和地址 # example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380 ETCD_NAME=$1 #位置變量1:etcd節(jié)點(diǎn)名稱 ETCD_IP=$2 #位置變量2:節(jié)點(diǎn)地址 ETCD_CLUSTER=$3 #位置變量3:集群 WORK_DIR=/opt/etcd #指定工作目錄 cat <<EOF >$WORK_DIR/cfg/etcd #在指定工作目錄創(chuàng)建ETCD的配置文件 #[Member] ETCD_NAME="${ETCD_NAME}" #etcd名稱 ETCD_DATA_DIR="/var/lib/etcd/default.etcd" ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380" #etcd IP地址:2380端口。用于集群之間通訊 ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379" #etcd IP地址:2379端口,用于開放給外部客戶端通訊 #[Clustering] ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380" ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379" #對外提供的url使用https的協(xié)議進(jìn)行訪問 ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}" #多路訪問 ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" #tokens 令牌環(huán)名稱:etcd-cluster 
ETCD_INITIAL_CLUSTER_STATE="new" #狀態(tài),重新創(chuàng)建 EOF cat <<EOF >/usr/lib/systemd/system/etcd.service #定義ectd的啟動(dòng)腳本 [Unit] #基本項(xiàng) Description=Etcd Server #類似為 etcd 服務(wù) After=network.target #vu癌癥 After=network-online.target Wants=network-online.target [Service] #服務(wù)項(xiàng) Type=notify EnvironmentFile=${WORK_DIR}/cfg/etcd #etcd文件位置 ExecStart=${WORK_DIR}/bin/etcd \ #準(zhǔn)啟動(dòng)狀態(tài)及以下的參數(shù) --name=\${ETCD_NAME} \ --data-dir=\${ETCD_DATA_DIR} \ --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \ --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \ --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \ #以下為群集內(nèi)部的設(shè)定 --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \ --initial-cluster=\${ETCD_INITIAL_CLUSTER} \ --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \ #群集內(nèi)部通信,也是使用的令牌,為了保證安全(防范中間人竊取) --initial-cluster-state=new \ --cert-file=${WORK_DIR}/ssl/server.pem \ #證書相關(guān)參數(shù) --key-file=${WORK_DIR}/ssl/server-key.pem \ --peer-cert-file=${WORK_DIR}/ssl/server.pem \ --peer-key-file=${WORK_DIR}/ssl/server-key.pem \ --trusted-ca-file=${WORK_DIR}/ssl/ca.pem \ --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem Restart=on-failure LimitNOFILE=65536 #開放最多的端口號 [Install] WantedBy=multi-user.target #進(jìn)行啟動(dòng) EOF systemctl daemon-reload #參數(shù)重載 systemctl enable etcd systemctl restart etcd#創(chuàng)建證書目錄,復(fù)制k8s目錄下的證書創(chuàng)建腳本 mkdir etcd-cert cd etcd-cert/ mv ../etcd-cert.sh ./#從官網(wǎng)源中下載制作證書的工具 curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo#執(zhí)行證書制作腳本(etcd-cert目錄下) chmod +x /usr/local/bin/cfssl chmod +x /usr/local/bin/cfssl-certinfo chmod +x /usr/local/bin/cfssljson bash etcd-cert.sh#ETCD 部署 
(下載并將軟件包放在k8s目錄下:etcd-v3.3.10-linux-amd64.tar.gz、flannel-v0.10.0-linux-amd64.tar.gz、kubernetes-server-linux-amd64.tar.gz)#解壓etcd-v3.3.10-linux-amd64.tar.gz cd /etcd tar zxvf etcd-v3.3.10-linux-amd64.tar.gz#創(chuàng)建ETCD工作目錄(cfg:配置文件目錄、bin:命令文件目錄、ssl:證書文件目錄) mkdir /opt/etcd/{cfg,bin,ssl} -p#拷貝命令文件 mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin#拷貝證書文件 cp etcd-cert/*.pem /opt/etcd/ssl#進(jìn)入卡住狀態(tài)等待其他節(jié)點(diǎn)加入 bash etcd.sh etcd01 192.168.184.140 etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380
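The etcd.sh arguments are easy to get wrong: the third argument lists only the *other* members, because the script prepends this node itself. A small sketch (variable values are this lab's) mirroring how the script's ETCD_INITIAL_CLUSTER line is composed:

```shell
# Illustrative only: how etcd.sh builds ETCD_INITIAL_CLUSTER from its arguments.
# $1=etcd01  $2=192.168.184.140  $3=the OTHER members only
ETCD_IP=192.168.184.140
ETCD_CLUSTER="etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380"

# The script hardcodes the leading "etcd01=" and appends the peers passed as $3:
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
echo "$ETCD_INITIAL_CLUSTER"
```

Note that because `etcd01=` is hardcoded, the generated config only suits the master; on the other nodes the file has to be edited by hand, which is exactly what the next subsection does.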
2、node節(jié)點(diǎn)
# Inspect and edit the config file on each node
ls /usr/lib/systemd/system/ | grep etcd
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"                                      # change the node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.184.141:2380"    # change the :2380 URL to this node's IP (.141)
ETCD_LISTEN_CLIENT_URLS="https://192.168.184.141:2379"  # change the :2379 URL to this node's IP (.141)

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.184.141:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.184.141:2379"   # these two also use the local IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.184.140:2380,etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
# Edit the node2 config file the same way

# Start the services: first run the script on the master (it blocks, waiting for members), then start etcd on both nodes
[root@k8s-master ~/k8s]# bash etcd.sh etcd01 192.168.184.140 etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380
[root@k8s-node01 /opt/etcd/cfg]# systemctl start etcd
[root@k8s-node02 /opt/etcd/cfg]# systemctl start etcd

# Check cluster health (on the master)
[root@k8s-master ~/k8s]# cd etcd-cert/
[root@k8s-master ~/k8s/etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" cluster-health

III. Flannel network deployment
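Hand-editing each node's /opt/etcd/cfg/etcd is error-prone. As a sketch, the same per-node substitutions can be applied with sed; this runs on a temp copy (the sample file holds only the per-node lines, since ETCD_INITIAL_CLUSTER must stay identical everywhere):

```shell
# Illustrative: derive node2's etcd member settings from the master's with sed,
# working on temp copies rather than /opt/etcd/cfg/etcd directly.
SRC=$(mktemp); DST=$(mktemp)
cat > "$SRC" <<'EOF'
ETCD_NAME="etcd01"
ETCD_LISTEN_PEER_URLS="https://192.168.184.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.184.140:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.184.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.184.140:2379"
EOF
# Rename the member and swap the master IP for node1's IP everywhere:
sed -e 's/etcd01/etcd02/' -e 's/192\.168\.184\.140/192.168.184.141/g' "$SRC" > "$DST"
grep ETCD_NAME "$DST"   # ETCD_NAME="etcd02"
```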
# Both nodes need the Docker engine installed first; for details see: Docker container introduction and installation
# Write the allocated network into etcd for flannel to use (on the master)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
# Command notes --------------------------------------------
# etcdctl authenticates with the CA certificate against the three etcd endpoints on port 2379
# set /coreos.com/network/config    writes the network configuration
# "Network": "172.17.0.0/16"        the aggregate (class B) network; each Pod subnet must be a class C slice of it
# "Backend": {"Type": "vxlan"}      VXLAN is used for cross-host traffic
# ----------------------------------------------------------

# Read the stored value back (on the master)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" get /coreos.com/network/config

# Upload the flannel package to all nodes and unpack it (on every node)
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz

# Create the k8s working directory (on every node)
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

# Create the startup script (both nodes)
vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld      # config file: flannel needs the CA certificate to talk to etcd
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service    # systemd unit
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
# Docker consumes the network that flannel provides
Restart=on-failure

[Install]
WantedBy=multi-user.target    # multi-user mode
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

# Enable the flannel network (both nodes)
bash flannel.sh https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379

# Wire docker to flannel (both nodes)
vim /usr/lib/systemd/system/docker.service
# ----- line 12: add
EnvironmentFile=/run/flannel/subnet.env
# ----- line 13: modify (add the $DOCKER_NETWORK_OPTIONS parameter)
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

# Inspect the subnet flannel allocated
cat /run/flannel/subnet.env

# Reload units and restart docker
systemctl daemon-reload
systemctl restart docker
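The glue between flannel and docker is /run/flannel/subnet.env: mk-docker-opts.sh writes it, and docker's unit file sources it via EnvironmentFile. A locally runnable sketch with a sample file (the 172.17.57.0/24 slice is a made-up allocation, not a real one) showing what `$DOCKER_NETWORK_OPTIONS` expands to in dockerd's ExecStart line:

```shell
# Simulate /run/flannel/subnet.env to show what docker's EnvironmentFile consumes.
# The 172.17.57.1/24 values are examples, not real flannel allocations.
SUBNET_ENV=$(mktemp)
cat > "$SUBNET_ENV" <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.57.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.57.1/24 --ip-masq=false --mtu=1450"
EOF
. "$SUBNET_ENV"
# dockerd's ExecStart expands $DOCKER_NETWORK_OPTIONS to these flags:
echo "$DOCKER_NETWORK_OPTIONS"
```

The `--bip` flag is why each node's docker0 bridge lands on a different /24 of the flannel network, and `--mtu=1450` leaves room for the VXLAN header.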
IV. Testing inter-container connectivity
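The check here is simple: each node's docker0 receives a different /24 slice of the flannel network, and a container started on node1 (e.g. `docker run -it busybox`) should be able to ping a container's 172.17.x.x address on node2. That test needs both live nodes; as a locally runnable sketch, here is the membership check it relies on, with illustrative per-node subnets:

```shell
# Check that a per-node subnet lies inside the flannel Network (172.17.0.0/16).
# The sample subnets below are illustrative, not real allocations.
in_flannel_net() {
  # A /16 match reduces to comparing the first two octets.
  case "$1" in
    172.17.*) return 0 ;;
    *)        return 1 ;;
  esac
}
in_flannel_net 172.17.57.0/24 && echo "node1 subnet is inside the overlay"
in_flannel_net 172.17.92.0/24 && echo "node2 subnet is inside the overlay"
in_flannel_net 10.0.0.0/24 || echo "10.0.0.0/24 is outside the overlay"
```

If the cross-node ping fails even though both subnets sit inside 172.17.0.0/16, the usual suspects are the flanneld service, docker not having been restarted after the unit edit, or host firewall rules.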
V. Single-master deployment
1. Deploying the master components
#創(chuàng)建k8s工作目錄和apiserver的證書目錄 cd ~/k8s mkdir /opt/kubernetes/{cfg,bin,ssl} -p mkdir k8s-cert#生成證書 cd k8s-cert vim k8s-cert.sh cat > ca-config.json <<EOF {"signing": {"default": {"expiry": "87600h"},"profiles": {"kubernetes": {"expiry": "87600h","usages": ["signing","key encipherment","server auth","client auth"]}}} } EOFcat > ca-csr.json <<EOF {"CN": "kubernetes","key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "Beijing","ST": "Beijing","O": "k8s","OU": "System"}] } EOF cfssl gencert -initca ca-csr.json | cfssljson -bare ca -cat > server-csr.json <<EOF {"CN": "kubernetes","hosts": ["10.0.0.1","127.0.0.1","192.168.184.140", #master1節(jié)點(diǎn)"192.168.184.145", #master2節(jié)點(diǎn)(為之后做多節(jié)點(diǎn)做準(zhǔn)備)"192.168.184.200", #VIP飄逸地址"192.168.184.146", #nginx1負(fù)載均衡地址(主)"192.168.184.147", #nginx2負(fù)載均衡地址(備)"kubernetes","kubernetes.default","kubernetes.default.svc","kubernetes.default.svc.cluster","kubernetes.default.svc.cluster.local"],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}] } EOF cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare servercat > admin-csr.json <<EOF {"CN": "admin","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "system:masters","OU": "System"}] } EOF cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admincat > kube-proxy-csr.json <<EOF {"CN": "system:kube-proxy","hosts": [],"key": {"algo": "rsa","size": 2048},"names": [{"C": "CN","L": "BeiJing","ST": "BeiJing","O": "k8s","OU": "System"}] } EOF cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy#直接執(zhí)行腳本生成K8S的證書 bash k8s-cert.sh#此時(shí)查看本地目錄的證書文件,應(yīng)該有8個(gè) ls *.pem#把ca server端的證書復(fù)制到k8s工作目錄 cp ca*.pem server*.pem 
/opt/kubernetes/ssl ls /opt/kubernetes/ssl/#解壓kubernetes壓縮包 cd ../ tar zxvf kubernetes-server-linux-amd64.tar.gz#復(fù)制關(guān)鍵命令到k8s的工作目錄中 cd kubernetes/server/bin cp kube-controller-manager kubectl kube-apiserver kube-scheduler /opt/kubernetes/bin#使用head -c 16 /dev/urandom | od -An -t x | tr -d ’ ',隨機(jī)生成序列號 生成隨機(jī)序列號 head -c 16 /dev/urandom | od -An -t x | tr -d ' '#創(chuàng)建token(令牌)文件 cd /opt/kubernetes/cfg vim token.csv 上一步隨機(jī)序列號,kubelet-bootstrap,10001,"system:kubelet-bootstrap" ------------------------------ 此角色的定位和作用如下: ① 創(chuàng)建位置:在master節(jié)點(diǎn)創(chuàng)建bootstrap角色 ② 管理node節(jié)點(diǎn)的kubelet ③ kubelet-bootstrap 管理、授權(quán)system:kubelet-bootstrap ④ 而system:kubelet-bootstrap 則管理node節(jié)點(diǎn)的kubelet ⑤ token就是授權(quán)給system:kubelet-bootstrap角色,如果此角色沒有token的授權(quán),則不能管理node下的kubelet ------------------------------#二進(jìn)制文件,token,證書準(zhǔn)備齊全后,開啟apiserver 上傳master.zip cd /root/k8s unzip master.zip chmod +x controller-manager.shapiserver.sh 腳本簡介------------------------------------- #!/bin/bashMASTER_ADDRESS=$1 #本地地址 ETCD_SERVERS=$2 #群集cat <<EOF >/opt/kubernetes/cfg/kube-apiserver #生成配置文件到k8s工作目錄KUBE_APISERVER_OPTS="--logtostderr=true \\ #從ETCD讀取、存入數(shù)據(jù) --v=4 \\ --etcd-servers=${ETCD_SERVERS} \\ --bind-address=${MASTER_ADDRESS} \\ #綁定地址 --secure-port=6443 \\ --advertise-address=${MASTER_ADDRESS} \\ #master本地地址 --allow-privileged=true \\ #允許授權(quán) --service-cluster-ip-range=10.0.0.0/24 \\ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\ #plugin插件,包括命名空間中的插件、server端的授權(quán) --authorization-mode=RBAC,Node \\ #使用RBAC模式驗(yàn)證node端 --kubelet-https=true \\ #允許對方使用https協(xié)議進(jìn)行訪問 --enable-bootstrap-token-auth \\ #開啟bootstrap令牌授權(quán) --token-auth-file=/opt/kubernetes/cfg/token.csv \\ #令牌文件路徑 --service-node-port-range=30000-50000 \\ #開啟的監(jiān)聽端口 #以下均為證書文件 --tls-cert-file=/opt/kubernetes/ssl/server.pem \\ --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\ 
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\ --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\ --etcd-cafile=/opt/etcd/ssl/ca.pem \\ --etcd-certfile=/opt/etcd/ssl/server.pem \\ --etcd-keyfile=/opt/etcd/ssl/server-key.pem" EOFcat <<EOF >/usr/lib/systemd/system/kube-apiserver.service #服務(wù)啟動(dòng)腳本 [Unit] Description=Kubernetes API Server Documentation=https://github.com/kubernetes/kubernetes[Service] EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS Restart=on-failure[Install] WantedBy=multi-user.target EOFsystemctl daemon-reload systemctl enable kube-apiserver systemctl restart kube-apiserver ---------------------------------------------------------#開啟apiserver bash apiserver.sh 192.168.184.140 https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379#查看api進(jìn)程驗(yàn)證啟動(dòng)狀態(tài) ps aux | grep kube#查看配置文件是否正常 cat /opt/kubernetes/cfg/kube-apiserver#查看進(jìn)行端口是否開啟 netstat -natp | grep 6443#查看scheduler啟動(dòng)腳本 vim scheduler.sh #!/bin/bash MASTER_ADDRESS=$1 cat <<EOF >/opt/kubernetes/cfg/kube-scheduler KUBE_SCHEDULER_OPTS="--logtostderr=true \\ #定義日志記錄 --v=4 \\ --master=${MASTER_ADDRESS}:8080 \\ #定義master地址,指向8080端口 --leader-elect" #定位為leader EOF cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service #定義啟動(dòng)腳本 [Unit] Description=Kubernetes Scheduler Documentation=https://github.com/kubernetes/kubernetes [Service] EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS Restart=on-failure [Install] WantedBy=multi-user.target EOF systemctl daemon-reload systemctl enable kube-scheduler systemctl restart kube-scheduler#啟動(dòng)scheduler服務(wù) ./scheduler.sh 127.0.0.1#查看進(jìn)程 ps aux | grep sch#查看服務(wù) systemctl status kube-scheduler.service#啟動(dòng)controller-manager服務(wù) ./controller-manager.sh #查看服務(wù) systemctl status kube-controller-manager.service 
# Finally, check the master component status
/opt/kubernetes/bin/kubectl get cs

# Copy kubelet and kube-proxy from the master to the worker nodes
cd kubernetes/server/bin/
scp kubelet kube-proxy root@192.168.184.141:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.184.142:/opt/kubernetes/bin/

# kubeconfig setup
cd ~/k8s
mkdir kubeconfig
cd kubeconfig
# (upload the kubeconfig.sh script)
mv kubeconfig.sh kubeconfig
vim kubeconfig
BOOTSTRAP_TOKEN=b0bff184cbd37dae1351103ad3458685
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set the cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set the client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=664f2017059d58e78f6cce2e47ef383b \
  --kubeconfig=bootstrap.kubeconfig
# Only the token above needs editing; take it from /opt/kubernetes/cfg/token.csv

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

# Add /opt/kubernetes/bin to PATH so kubectl can be used directly
export PATH=$PATH:/opt/kubernetes/bin/
kubectl get cs

# Run the kubeconfig script
bash kubeconfig 192.168.184.140 /root/k8s/k8s-cert/

# Copy the two generated kubeconfig files to the worker nodes
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.184.141:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.184.142:/opt/kubernetes/cfg/

# Create the bootstrap clusterrolebinding so nodes can request certificate signing from the apiserver
# (only after this authorization are nodes fully joined to the cluster and manageable by the master)
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
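The bootstrap token is nothing more than 16 random bytes rendered as 32 hex characters. A locally runnable sketch generating one with the exact pipeline the text uses and building the token.csv line (written to a temp file here rather than /opt/kubernetes/cfg/token.csv):

```shell
# Generate a bootstrap token the same way the text does and build the csv line.
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
TOKEN_CSV=$(mktemp)
cat > "$TOKEN_CSV" <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "token length: ${#BOOTSTRAP_TOKEN}"   # token length: 32
```

The same token value must appear in both token.csv (read by the apiserver) and bootstrap.kubeconfig's `--token` field; a mismatch is the classic reason node CSRs never show up in `kubectl get csr`.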
2. Deploying the worker nodes
①. node1
# Upload node.zip and unpack it
unzip node.zip

# Run the kubelet script, which requests to join the master
bash kubelet.sh 192.168.184.141

# Check the kubelet process and service
ps aux | grep kubelet
systemctl status kubelet.service

# On the master, check node1's certificate signing request
kubectl get csr

# Approve the certificate (on the master)
kubectl certificate approve <node1's CSR name>

# Check the csr again (on the master)
kubectl get csr

# Check the cluster nodes (on the master)
kubectl get node

# Start the kube-proxy service on node1
bash proxy.sh 192.168.184.141
systemctl status kube-proxy.service
②. node2
# Copy the /opt/kubernetes directory from node1 to node2 (on node1)
scp -r /opt/kubernetes/ root@192.168.184.142:/opt

# Copy the service unit files (on node1)
scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.184.142:/usr/lib/systemd/system/

# On node2, delete all copied certificates (node2 will request its own)
cd /opt/kubernetes/ssl/
rm -rf *

# Change the IP address in the kubelet config file
cd ../cfg
vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.184.142 \   # change to node2's local address
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

# Change the kubelet.config file
vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.184.142     # change to the local address
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2     # DNS resolver address; note it down
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

# Change the kube-proxy config file
vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.184.142 \   # change to the local address
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

# Start the services
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

# Authorize node2 on the master (on the master)
kubectl get csr
kubectl certificate approve <node2's CSR name>
kubectl get csr

# Check the cluster state (on the master)
kubectl get node

VI. Dual-master deployment built on the single-master setup
1. Setting up master2
# Copy the key files and directories from master1 (on master1)
scp -r /opt/kubernetes/ root@192.168.184.145:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.184.145:/usr/lib/systemd/system/

# Change the IP addresses in the kube-apiserver config (on master2)
cd /opt/kubernetes/cfg
vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379 \
--bind-address=192.168.184.145 \        # bind to the local IP
--secure-port=6443 \
--advertise-address=192.168.184.145 \   # change the advertised address too
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

# Copy master1's existing etcd certificates for master2 to use (on master1)
scp -r /opt/etcd/ root@192.168.184.145:/opt/

# Start the kube-apiserver, kube-controller-manager and kube-scheduler services (on master2)
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl status kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl status kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
systemctl status kube-scheduler.service

# Add the environment variable (on master2)
echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile
source /etc/profile

# Check cluster node information
kubectl get node
2. nginx load-balancer deployment
#關(guān)閉防火墻和核心防護(hù) systemctl stop firewalld systemctl disable firewalld setenforce 0 sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config#添加nginx官方y(tǒng)um源、安裝nginx vim /etc/yum.repos.d/nginx.repo [nginx] name=nginx repo baseurl=http://nginx.org/packages/centos/7/$basearch/ gpgcheck=0yum list yum install nginx -y#在nginx配置文件中添加四層轉(zhuǎn)發(fā)功能(event和http之間進(jìn)行插入) stream {log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';access_log /var/log/nginx/k8s-access.log main;upstream k8s-apiserver {server 192.168.184.140:6443; server 192.168.184.145:6443;}server {listen 6443;proxy_pass k8s-apiserver; }}#在兩個(gè)nginx節(jié)點(diǎn)部署keeplived服務(wù) ##nginx-master yum install keepalived -y#修改nginx-master的keepalived配置文件 vim /etc/keepalived/keepalived.conf ! Configuration File for keepalived global_defs {notification_email {acassen@firewall.locfailover@firewall.locsysadmin@firewall.loc} notification_email_from Alexandre.Cassen@firewall.locsmtp_server 127.0.0.1smtp_connect_timeout 30router_id NGINX_MASTER } vrrp_script check_nginx { script "/etc/nginx/check_nginx.sh" } vrrp_instance VI_1 {state MASTERinterface ens33virtual_router_id 51priority 100 advert_int 1 authentication {auth_type PASSauth_pass 1111}virtual_ipaddress {192.168.184.200/24}track_script { check_nginx } }##nginx-slave vim /etc/keepalived/keepalived.conf ! 
Configuration File for keepalived global_defs { notification_email {acassen@firewall.locfailover@firewall.locsysadmin@firewall.loc} notification_email_from Alexandre.Cassen@firewall.locsmtp_server 127.0.0.1smtp_connect_timeout 30router_id NGINX_MASTER } vrrp_script check_nginx { script "/etc/nginx/check_nginx.sh" } vrrp_instance VI_1 {state BACKUPinterface ens33virtual_router_id 51priority 90 advert_int 1 authentication {auth_type PASSauth_pass 1111}virtual_ipaddress {192.168.184.200/24}track_script { check_nginx } }#在/etc/nginx/目錄下創(chuàng)建ngixn檢測腳本 vim /etc/nginx/check_nginx.sh count=$(ps -ef |grep nginx |egrep -cv "grep|$$") if [ "$count" -eq 0 ];thensystemctl stop keepalived fichmod +x /etc/nginx/check_nginx.sh#啟動(dòng)keepalived服務(wù) systemctl start keepalived.service#nginx-master上使用ip a查看是否有漂移地址#nginx-master上使用pkill nginx模擬故障#nginx-slave上使用ip a 查看漂移地址是否已經(jīng)漂移過來#nginx-master上先開啟nginx服務(wù),再啟動(dòng)keepalived服務(wù),進(jìn)行恢復(fù)
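The heart of check_nginx.sh is a process count that excludes both the grep itself and the script's own shell (`$$`); when it drops to zero, keepalived is stopped so the VIP fails over. The counting logic can be exercised locally against canned `ps -ef` output (the sample lines and the excluded PID are fabricated for illustration):

```shell
# Exercise check_nginx.sh's counting logic against fabricated ps output.
count_nginx() {
  # $1: simulated `ps -ef` output; $2: PID to exclude (the script's own $$ in real use)
  printf '%s\n' "$1" | grep nginx | grep -Ecv "grep|$2"
}
PS_RUNNING="root 101 1 0 10:00 ? 00:00:00 nginx: master process
nginx 102 101 0 10:00 ? 00:00:00 nginx: worker process"
PS_DEAD="root 201 1 0 10:00 ? 00:00:00 sshd"
echo "running: $(count_nginx "$PS_RUNNING" 99999)"   # running: 2
echo "dead: $(count_nginx "$PS_DEAD" 99999)"         # dead: 0
```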
3. Worker-node configuration
# Point both nodes' config files at the VIP
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
server: https://192.168.184.200:6443
vim /opt/kubernetes/cfg/kubelet.kubeconfig
server: https://192.168.184.200:6443
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
server: https://192.168.184.200:6443

# Self-check
cd /opt/kubernetes/cfg
grep 200 *.kubeconfig

VII. Testing
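Pointing the three kubeconfigs at the VIP is one substitution per file, so it can be scripted instead of done in vim. A sketch running on scratch copies (the real files live in /opt/kubernetes/cfg/ on each node, and the services should be restarted afterwards):

```shell
# Rewrite the kubeconfig server lines to the VIP, on scratch copies.
CFG_DIR=$(mktemp -d)
for f in bootstrap kubelet kube-proxy; do
  echo "    server: https://192.168.184.140:6443" > "$CFG_DIR/$f.kubeconfig"
done
# Swap the master1 address for the VIP in every kubeconfig:
sed -i 's#https://192.168.184.140:6443#https://192.168.184.200:6443#' "$CFG_DIR"/*.kubeconfig
grep -h 'server:' "$CFG_DIR"/*.kubeconfig
```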
1. Operations on master1
#創(chuàng)建pod kubectl run nginx --image=nginx#查看狀態(tài) kubectl get pods#查看創(chuàng)建的pod位置 kubectl get pods -o wide#在node1節(jié)點(diǎn)查看容器列表 docker ps -a
2. curl test from node1
# From node1, curl the Pod's IP address
curl 172.17.9.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>