

Quickly building a Kubernetes cluster with ansible

發(fā)布時(shí)間:2025/3/19 编程问答 17 豆豆

Hardware and software requirements:

1) CPU and memory: master at least 1C/2G (2C/4G recommended); node at least 1C/2G

2) Linux: kernel version at least 3.10; CentOS 7 / RHEL 7 recommended

3) Docker: at least version 1.9; 1.12+ recommended

4) etcd: at least version 2.0; 3.0+ recommended
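A quick way to check a host against the kernel floor is a version-sort comparison; this is a minimal sketch using `sort -V` (available in any modern coreutils), not part of the original playbooks:

```shell
# Compare the running kernel against the 3.10 minimum using version sort
min=3.10
cur=$(uname -r | cut -d- -f1)   # e.g. "3.10.0" from "3.10.0-957.el7.x86_64"
lowest=$(printf '%s\n' "$min" "$cur" | sort -V | head -n1)
if [ "$lowest" = "$min" ]; then
  echo "kernel $cur OK (>= $min)"
else
  echo "kernel $cur too old (< $min)" >&2
fi
```

Run it on each machine before starting; any host that prints "too old" needs a newer OS.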

Official Kubernetes GitHub releases: https://github.com/kubernetes/kubernetes/releases

Node planning for a highly available cluster:

deploy node ------ x1 : the node that runs this ansible playbook

etcd nodes ------- x3 : the etcd cluster must have an odd number of members (1, 3, 5, 7, ...)

master nodes ----- x2 : add more according to actual cluster size; an extra master VIP (virtual IP) must also be planned

lb nodes --------- x2 : two load-balancer nodes running haproxy + keepalived

node nodes ------- x3 : the nodes that actually run workloads; upgrade the machines and add nodes as needed

機(jī)器規(guī)劃:
<table>
<tr>
<th>IP</th>
<th>Hostname</th>
<th>Role</th>
<th>OS</th>
</tr>
<tr>
<td>192.168.2.10</td>
<td>master</td>
<td>deploy, master1, lb1, etcd</td>
<td rowspan="6">centos7.5 x86_64</td>
</tr>
<tr>
<td>192.168.2.11</td>
<td>node1</td>
<td>etcd, node</td>
</tr>
<tr>
<td>192.168.2.12</td>
<td>node2</td>
<td>etcd, node</td>
</tr>
<tr>
<td>192.168.2.13</td>
<td>node3</td>
<td>node</td>
</tr>
<tr>
<td>192.168.2.14</td>
<td>master2</td>
<td>master2, lb2</td>
</tr>
<tr>
<td>192.168.2.16</td>
<td></td>
<td>vip</td>
</tr>
</table>

準(zhǔn)備工作

安裝epel源、python

六臺(tái)機(jī)器,全部執(zhí)行:

yum install epel-release
yum update
yum install python

deploy節(jié)點(diǎn)安裝和準(zhǔn)備ansible

yum install -y python-pip git
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install --no-cache-dir ansible -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

deploy節(jié)點(diǎn)配置免密碼登錄

奉上我使用多年的自動(dòng)布置key的腳本

#!/bin/bash
keypath=/root/.ssh
[ -d ${keypath} ] || mkdir -p ${keypath}
rpm -q expect &> /dev/null || yum install expect -y
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ""
password=centos
for host in `seq 10 14`; do
expect <<EOF
set timeout 5
spawn ssh-copy-id 192.168.2.$host
expect {
"yes/no" { send "yes\n";exp_continue }
"password" { send "$password\n" }
}
expect eof
EOF
done

執(zhí)行腳本,deploy自動(dòng)copy key到目標(biāo)主機(jī)

[root@master ~]# sh sshkey.sh

Set up the k8s playbooks on deploy

git clone https://github.com/gjmzj/kubeasz.git
mkdir -p /etc/ansible
mv kubeasz/* /etc/ansible/

Download the binary files from the Baidu netdisk share: https://pan.baidu.com/s/1c4RFaA#list/path=%2F

Pick the tar package matching the version you need; here I download 1.12.

After some effort, I got the k8s.1-12-1.tar.gz package onto the deploy node.

tar zxvf k8s.1-12-1.tar.gz
mv bin/* /etc/ansible/bin/

Example:

[root@master ~]# rz
rz waiting to receive.
Starting zmodem transfer.  Press Ctrl+C to cancel.
Transferring k8s.1-12-1.tar.gz...
  100%  234969 KB  58742 KB/sec  00:00:04  0 Errors
[root@master ~]# ls
anaconda-ks.cfg  ifcfg-ens192.bak  k8s.1-12-1.tar.gz  kubeasz
[root@master ~]# tar zxf k8s.1-12-1.tar.gz
[root@master ~]# ls
anaconda-ks.cfg  bin  ifcfg-ens192.bak  k8s.1-12-1.tar.gz  kubeasz
[root@master ~]# mv bin /etc/ansible/
mv: overwrite "/etc/ansible/bin/readme.md"? y

Configure the cluster parameters

cd /etc/ansible
cp example/hosts.m-masters.example hosts    # edit to match your environment

[deploy]
192.168.2.10 NTP_ENABLED=no

# 'etcd' cluster must have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
192.168.2.10 NODE_NAME=etcd1
192.168.2.11 NODE_NAME=etcd2
192.168.2.12 NODE_NAME=etcd3

[kube-master]
192.168.2.10
192.168.2.14

# 'loadbalance' node, with 'haproxy+keepalived' installed
[lb]
192.168.2.10 LB_IF="eth0" LB_ROLE=backup    # replace 'eth0' with the node's network interface
192.168.2.14 LB_IF="eth0" LB_ROLE=master

[kube-node]
192.168.2.11
192.168.2.12
192.168.2.13

[vip]
192.168.2.16

After editing hosts, test connectivity:

ansible all -m ping

[root@master ansible]# ansible all -m ping
192.168.2.11 | SUCCESS => {"changed": false, "ping": "pong"}
192.168.2.14 | SUCCESS => {"changed": false, "ping": "pong"}
192.168.2.12 | SUCCESS => {"changed": false, "ping": "pong"}
192.168.2.10 | SUCCESS => {"changed": false, "ping": "pong"}
192.168.2.13 | SUCCESS => {"changed": false, "ping": "pong"}
192.168.2.15 | SUCCESS => {"changed": false, "ping": "pong"}

Step-by-step installation:

1) Create certificates and prepare for installation

ansible-playbook 01.prepare.yml

2) Install the etcd cluster

ansible-playbook 02.etcd.yml

Check the health of the etcd nodes by running:

for ip in 10 11 12; do
  ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.$ip:2379 \
    --cacert=/etc/kubernetes/ssl/ca.pem \
    --cert=/etc/etcd/ssl/etcd.pem \
    --key=/etc/etcd/ssl/etcd-key.pem \
    endpoint health
done

執(zhí)行后:

https://192.168.2.10:2379 is healthy: successfully committed proposal: took = 857.393μs
https://192.168.2.11:2379 is healthy: successfully committed proposal: took = 1.0619ms
https://192.168.2.12:2379 is healthy: successfully committed proposal: took = 1.19245ms

Alternatively, add /etc/ansible/bin to the PATH:

[root@master ansible]# vim /etc/profile.d/k8s.sh
export PATH=$PATH:/etc/ansible/bin
[root@master ansible]# for ip in 10 11 12 ; do ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.$ip:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health; done
https://192.168.2.10:2379 is healthy: successfully committed proposal: took = 861.891μs
https://192.168.2.11:2379 is healthy: successfully committed proposal: took = 1.061687ms
https://192.168.2.12:2379 is healthy: successfully committed proposal: took = 909.274μs

3) Install docker

ansible-playbook 03.docker.yml

4) Install the master nodes

ansible-playbook 04.kube-master.yml

Check the cluster status:

kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

5) Install the worker nodes

ansible-playbook 05.kube-node.yml

Check the nodes:

kubectl get nodes

[root@master ansible]# kubectl get nodes
NAME           STATUS                     ROLES    AGE    VERSION
192.168.2.10   Ready,SchedulingDisabled   master   112s   v1.12.1
192.168.2.11   Ready                      node     17s    v1.12.1
192.168.2.12   Ready                      node     17s    v1.12.1
192.168.2.13   Ready                      node     17s    v1.12.1
192.168.2.14   Ready,SchedulingDisabled   master   112s   v1.12.1

6) Deploy the cluster network

ansible-playbook 06.network.yml

Check the pods in the kube-system namespace; the flannel pods should appear:

[root@master ansible]# kubectl get pod -n kube-system
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-5d574   1/1     Running   0          47s
kube-flannel-ds-6kpnm   1/1     Running   0          47s
kube-flannel-ds-f2nfs   1/1     Running   0          47s
kube-flannel-ds-gmbmv   1/1     Running   0          47s
kube-flannel-ds-w5st7   1/1     Running   0          47s

7) Install the cluster add-ons (dns, dashboard)

ansible-playbook 07.cluster-addon.yml

Check the services in the kube-system namespace:

kubectl get svc -n kube-system

[root@master ~]# kubectl get svc -n kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP   10h
kubernetes-dashboard   NodePort    10.68.119.108   <none>        443:35065/TCP   10h
metrics-server         ClusterIP   10.68.235.9     <none>        443/TCP         10h

Get the admin token for logging in to the dashboard

The step-by-step deployment is now essentially complete; look up the login token to sign in to the dashboard:

[root@master ~]# kubectl get secret -n kube-system|grep admin
admin-user-token-4zdgw   kubernetes.io/service-account-token   3   9h
[root@master ~]# kubectl describe secret admin-user-token-4zdgw -n kube-system
Name:         admin-user-token-4zdgw
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 72378c78-ee7d-11e8-a2a7-000c2931fb97
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1346 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTR6ZGd3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MjM3OGM3OC1lZTdkLTExZTgtYTJhNy0wMDBjMjkzMWZiOTciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.J0MjCSAP00RDvQgG1xBPvAYVo1oycXfoBh0dqdCzX1ByILyCHUqqxixuQdfE-pZqP15u6UV8OF3lGI_mHs5DBvNK0pCfaRICSo4SXSihJHKl_j9Bbozq9PjQ5d7CqHOFoXk04q0mWpJ5o0rJ6JX6Psx93Ch0uaXPPMLtzL0kolIF0j1tCFnsob8moczH06hfzo3sg8h0YCXyO6Z10VT7GMuLlwiG8XgWcplm-vcPoY_AWHnLV3RwAJH0u1q0IrMprvgTCuHighTaSjPeUe2VsXMhDpocJMoHQOoHirQKmiIAnanbIm4N1TO_5R1cqh-_gH7-MH8xefgWXoSrO-fo2w

After passing the login prompt, paste the token above into the dashboard UI to sign in.

You can also query the secret directly:

[root@master ~]# kubectl get secret -n kube-system NAME TYPE DATA AGE admin-user-token-4zdgw kubernetes.io/service-account-token 3 10h coredns-token-98zvm kubernetes.io/service-account-token 3 10h default-token-zk5rj kubernetes.io/service-account-token 3 10h flannel-token-4gmtz kubernetes.io/service-account-token 3 10h kubernetes-dashboard-certs Opaque 0 10h kubernetes-dashboard-key-holder Opaque 2 10h kubernetes-dashboard-token-lcsd6 kubernetes.io/service-account-token 3 10h metrics-server-token-j4s2c kubernetes.io/service-account-token 3 10h [root@master ~]# kubectl get secret/admin-user-token-4zdgw -n kube-system -o yaml apiVersion: v1 data:ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVVFB3YVdFR0gyT2kwaHlVeGlJWnhFSUF3UFpVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1UZ3hNVEl5TVRZeE1EQXdXaGNOTXpNeE1URTRNVFl4TURBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRVJNQThHQTFVRUNCTUlTR0Z1WjFwb2IzVXhDekFKQmdOVkJBY1RBbGhUTVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQmxONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5zV1NweVVRcGYvWDFCaHNtUS9NUDVHVE0zcUFjWngKV3lKUjB0VEtyUDVWNStnSjNZWldjK01HSzlrY3h6OG1RUUczdldvNi9ENHIyZ3RuREVWaWxRb1dlTm0rR3hLSwpJNjkzczNlS2ovM1dIdGloOVA4TWp0RktMWnRvSzRKS09kUURYeGFHLzJNdzJEMmZnbzNJT2VDdlZzR0F3Qlc4ClYxMDh3dUVNdTIzMnhybFdSSFFWaTNyc0dmN3pJbkZzSFNOWFFDbXRMMHhubERlYnZjK2c2TWRtcWZraVZSdzIKNTFzZGxnbmV1aEFqVFJaRkYvT0lFWE4yUjIyYTJqZVZDbWNySEcvK2orU0tzTlpmeVVCb216NGRUcmRsV0JEUQpob3ZzSGkrTEtJVGNxZHBQV3MrZmxIQjlaL1FRUnM5MTZEREpxMHRWNFV6MEY0YjRsemJXaGdrQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRklaN3NZczRjV0xtYnlwVUEwWUhGanc3Mk5jV01COEdBMVVkSXdRWU1CYUFGSVo3c1lzNGNXTG1ieXBVQTBZSApGanc3Mk5jV01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2Eyell1NmVqMlRURWcyN1VOeGh4U0ZMaFJLTHhSClg5
WnoyTmtuVjFQMXhMcU8xSHRUSmFYajgvL0wxUmdkQlRpR3JEOXNENGxCdFRRMmF2djJFQzZKeXJyS0xVelUKSWNiUXNpU0h4NkQ3S1FFWjFxQnZkNWRKVDluai9mMG94SjlxNDVmZTBJbWNiUndKWnA2WDJKbWtQSWZyYjYreQo2YUFTbzhaakliTktQN1Z1WndIQ1RPQUwzeUhVR2lJTEJtT1hKNldGRDlpTWVFMytPZE95ZHIwYzNvUmRXVW1aCkI1andlN2x2MEtVc2Y1SnBTS0JCbzZ3bkViNXhMdDRSYjBMa2RxMXZLTGFOMXUvbXFFc1ltbUk3MmRuaUdLSTkKakdDdkRqNVREaW55T1RQU005Vi81RE5OTFlLQkExaDRDTmVBRjE1RWlCay9EU055SzIrUTF3TVgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=namespace: a3ViZS1zeXN0ZW0=token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSnJkV0psTFhONWMzUmxiU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUpoWkcxcGJpMTFjMlZ5TFhSdmEyVnVMVFI2WkdkM0lpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVibUZ0WlNJNkltRmtiV2x1TFhWelpYSWlMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzUxYVdRaU9pSTNNak0zT0dNM09DMWxaVGRrTFRFeFpUZ3RZVEpoTnkwd01EQmpNamt6TVdaaU9UY2lMQ0p6ZFdJaU9pSnplWE4wWlcwNmMyVnlkbWxqWldGalkyOTFiblE2YTNWaVpTMXplWE4wWlcwNllXUnRhVzR0ZFhObGNpSjkuSjBNakNTQVAwMFJEdlFnRzF4QlB2QVlWbzFveWNYZm9CaDBkcWRDelgxQnlJTHlDSFVxcXhpeHVRZGZFLXBacVAxNXU2VVY4T0YzbEdJX21IczVEQnZOSzBwQ2ZhUklDU280U1hTaWhKSEtsX2o5QmJvenE5UGpRNWQ3Q3FIT0ZvWGswNHEwbVdwSjVvMHJKNkpYNlBzeDkzQ2gwdWFYUFBNTHR6TDBrb2xJRjBqMXRDRm5zb2I4bW9jekgwNmhmem8zc2c4aDBZQ1h5TzZaMTBWVDdHTXVMbHdpRzhYZ1djcGxtLXZjUG9ZX0FXSG5MVjNSd0FKSDB1MXEwSXJNcHJ2Z1RDdUhpZ2hUYVNqUGVVZTJWc1hNaERwb2NKTW9IUU9vSGlyUUttaUlBbmFuYkltNE4xVE9fNVIxY3FoLV9nSDctTUg4eGVmZ1dYb1NyTy1mbzJ3 kind: Secret metadata:annotations:kubernetes.io/service-account.name: admin-userkubernetes.io/service-account.uid: 72378c78-ee7d-11e8-a2a7-000c2931fb97creationTimestamp: 2018-11-22T17:38:38Zname: admin-user-token-4zdgwnamespace: kube-systemresourceVersion: "977"selfLink: /api/v1/namespaces/kube-system/secrets/admin-user-token-4zdgwuid: 
7239bb01-ee7d-11e8-8c5c-000c29fd1c0f type: kubernetes.io/service-account-token

Check the ServiceAccounts

A ServiceAccount is an account, but not one for cluster users (administrators, operators, and so on); it is used by the processes running inside Pods in the cluster.

[root@master ~]# kubectl get serviceaccount --all-namespaces NAMESPACE NAME SECRETS AGE default default 1 10h kube-public default 1 10h kube-system admin-user 1 10h kube-system coredns 1 10h kube-system default 1 10h kube-system flannel 1 10h kube-system kubernetes-dashboard 1 10h kube-system metrics-server 1 10h [root@master ~]# kubectl describe serviceaccount/default -n kube-system Name: default Namespace: kube-system Labels: <none> Annotations: <none> Image pull secrets: <none> Mountable secrets: default-token-zk5rj Tokens: default-token-zk5rj Events: <none> [root@master ~]# kubectl get secret/default-token-zk5rj -n kube-system NAME TYPE DATA AGE default-token-zk5rj kubernetes.io/service-account-token 3 10h [root@master ~]# kubectl get secret/default-token-zk5rj -n kube-system -o yaml apiVersion: v1 data:ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR0akNDQXA2Z0F3SUJBZ0lVVFB3YVdFR0gyT2kwaHlVeGlJWnhFSUF3UFpVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RVRBUEJnTlZCQWdUQ0VoaGJtZGFhRzkxTVFzd0NRWURWUVFIRXdKWQpVekVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WlRlWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1UZ3hNVEl5TVRZeE1EQXdXaGNOTXpNeE1URTRNVFl4TURBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRVJNQThHQTFVRUNCTUlTR0Z1WjFwb2IzVXhDekFKQmdOVkJBY1RBbGhUTVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQmxONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5zV1NweVVRcGYvWDFCaHNtUS9NUDVHVE0zcUFjWngKV3lKUjB0VEtyUDVWNStnSjNZWldjK01HSzlrY3h6OG1RUUczdldvNi9ENHIyZ3RuREVWaWxRb1dlTm0rR3hLSwpJNjkzczNlS2ovM1dIdGloOVA4TWp0RktMWnRvSzRKS09kUURYeGFHLzJNdzJEMmZnbzNJT2VDdlZzR0F3Qlc4ClYxMDh3dUVNdTIzMnhybFdSSFFWaTNyc0dmN3pJbkZzSFNOWFFDbXRMMHhubERlYnZjK2c2TWRtcWZraVZSdzIKNTFzZGxnbmV1aEFqVFJaRkYvT0lFWE4yUjIyYTJqZVZDbWNySEcvK2orU0tzTlpmeVVCb216NGRUcmRsV0JEUQpob3ZzSGkrTEtJVGNxZHBQV3MrZmxIQjlaL1FRUnM5MTZEREpxMHRWNFV6MEY0YjRsemJXaGdrQ0F3RUFBYU5tCk1HUXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01CSUdBMVVkRXdFQi93UUlNQVlCQWY4Q0FRSXdIUVlEVlIwT0JCWUUKRklaN3NZc
zRjV0xtYnlwVUEwWUhGanc3Mk5jV01COEdBMVVkSXdRWU1CYUFGSVo3c1lzNGNXTG1ieXBVQTBZSApGanc3Mk5jV01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ2Eyell1NmVqMlRURWcyN1VOeGh4U0ZMaFJLTHhSClg5WnoyTmtuVjFQMXhMcU8xSHRUSmFYajgvL0wxUmdkQlRpR3JEOXNENGxCdFRRMmF2djJFQzZKeXJyS0xVelUKSWNiUXNpU0h4NkQ3S1FFWjFxQnZkNWRKVDluai9mMG94SjlxNDVmZTBJbWNiUndKWnA2WDJKbWtQSWZyYjYreQo2YUFTbzhaakliTktQN1Z1WndIQ1RPQUwzeUhVR2lJTEJtT1hKNldGRDlpTWVFMytPZE95ZHIwYzNvUmRXVW1aCkI1andlN2x2MEtVc2Y1SnBTS0JCbzZ3bkViNXhMdDRSYjBMa2RxMXZLTGFOMXUvbXFFc1ltbUk3MmRuaUdLSTkKakdDdkRqNVREaW55T1RQU005Vi81RE5OTFlLQkExaDRDTmVBRjE1RWlCay9EU055SzIrUTF3TVgKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=namespace: a3ViZS1zeXN0ZW0=token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklpSjkuZXlKcGMzTWlPaUpyZFdKbGNtNWxkR1Z6TDNObGNuWnBZMlZoWTJOdmRXNTBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5dVlXMWxjM0JoWTJVaU9pSnJkV0psTFhONWMzUmxiU0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUprWldaaGRXeDBMWFJ2YTJWdUxYcHJOWEpxSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXpaWEoyYVdObExXRmpZMjkxYm5RdWJtRnRaU0k2SW1SbFptRjFiSFFpTENKcmRXSmxjbTVsZEdWekxtbHZMM05sY25acFkyVmhZMk52ZFc1MEwzTmxjblpwWTJVdFlXTmpiM1Z1ZEM1MWFXUWlPaUpoTkRKaE9EUmlaQzFsWlRkakxURXhaVGd0T0dNMVl5MHdNREJqTWpsbVpERmpNR1lpTENKemRXSWlPaUp6ZVhOMFpXMDZjMlZ5ZG1salpXRmpZMjkxYm5RNmEzVmlaUzF6ZVhOMFpXMDZaR1ZtWVhWc2RDSjkuSTBqQnNkVk1udUw1Q2J2VEtTTGtvcFFyd1h4NTlPNWt0YnVJUHVaemVNTjJjdmNvTE9icS1Xa0NRWWVaaDEwdUFsWVBUbnAtTkxLTFhLMUlrQVpab3dzcllKVmJsQmdQVmVOUDhtOWJ4dk5HXzlMVjcyNGNOaU1aT2pfQ0ExREJEVF91eHlXWlF0eUEwZ0RpeTBRem1zMnZrVEpaZFNHQUZ6V2NVdjA1QWlsdUxaUUhLZmMyOWpuVGJERUhxT2U1UXU2cjRXd05qLTA0SE5qUzFpMHpzUGFkbmR0bzVSaUgtcThaSTVVT3hsNGYyUXlTMlJrWmdtV0tEM2tRaVBWUHpLZDRqRmJsLWhHN3VhQjdBSUVwcHBaUzVYby1USEFhRjJTSi1SUUJfenhDTG42QUZhU0EwcVhrYWhGYmpET0s0OTlZRTVlblJrNkpIRmZVWnR0YmlB kind: Secret metadata:annotations:kubernetes.io/service-account.name: defaultkubernetes.io/service-account.uid: a42a84bd-ee7c-11e8-8c5c-000c29fd1c0fcreationTimestamp: 2018-11-22T17:32:53Zname: 
default-token-zk5rjnamespace: kube-systemresourceVersion: "175"selfLink: /api/v1/namespaces/kube-system/secrets/default-token-zk5rjuid: a42daa94-ee7c-11e8-8c5c-000c29fd1c0f type: kubernetes.io/service-account-token
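All `data` fields in a Secret are base64-encoded, which is why `namespace:` in the YAML dump above reads `a3ViZS1zeXN0ZW0=` rather than plain text. Piping a value through `base64 -d` recovers it (a standalone sketch using the namespace value from the dump):

```shell
# Secret .data values are base64-encoded; decode one to recover the plain text
echo 'a3ViZS1zeXN0ZW0=' | base64 -d && echo
```

The `token` field decodes the same way; when a Secret is mounted into a pod, kubelet performs this decoding automatically.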

One-shot fully automated installation

Merging all the steps into one run gives the same result as the step-by-step install:

ansible-playbook 90.setup.yml

Check the cluster info:

kubectl cluster-info

[root@master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.2.16:8443
CoreDNS is running at https://192.168.2.16:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://192.168.2.16:8443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

Check node/pod resource usage:

kubectl top node
kubectl top pod --all-namespaces

Test DNS

a) Create an nginx service

kubectl run nginx --image=nginx --expose --port=80

b)創(chuàng)建alpine 測試pod

kubectl run b1 -it --rm --image=alpine /bin/sh    # drops into a shell inside alpine

Inside the container:

nslookup nginx.default.svc.cluster.local

The result looks like:

Address 1: 10.68.167.102 nginx.default.svc.cluster.local

Adding a node

1) Set up passwordless login from deploy to the new node

ssh-copy-id <new-node-ip>

2) Edit /etc/ansible/hosts

[new-node]
192.168.2.15

3)執(zhí)行安裝腳本

ansible-playbook /etc/ansible/20.addnode.yml

4)驗(yàn)證

kubectl get node
kubectl get pod -n kube-system -o wide

5)后續(xù)工作

修改/etc/ansible/hosts,將new-node里面的所有ip全部移動(dòng)到kube-node組里去
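After the new node joins successfully, the relevant groups in hosts would end up like this (a sketch based on the example inventory used earlier in this article):

```ini
[kube-node]
192.168.2.11
192.168.2.12
192.168.2.13
192.168.2.15

# the new-node group is left empty once its hosts have been moved
[new-node]
```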

Adding a master node (omitted)

https://github.com/gjmzj/kubeasz/blob/master/docs/op/AddMaster.md

Upgrading the cluster

1) Back up etcd

ETCDCTL_API=3 etcdctl snapshot save backup.db

Check the backup file:

ETCDCTL_API=3 etcdctl --write-out=table snapshot status backup.db

2) Change to the root directory of the kubeasz project

cd /dir/to/kubeasz

Pull the latest code:

git pull origin master

3) Download the target-version kubernetes binary package (Baidu netdisk: https://pan.baidu.com/s/1c4RFaA#list/path=%2F), unpack it, and replace the binaries under /etc/ansible/bin/

4) Upgrading docker (omitted); unless you really need to, frequent docker upgrades are not recommended

5) If a service interruption is acceptable, run:

ansible-playbook -t upgrade_k8s,restart_dockerd 22.upgrade.yml

6) If even a brief interruption is unacceptable, do this instead:

a)

ansible-playbook -t upgrade_k8s 22.upgrade.yml

b) On every node, one at a time:

kubectl cordon <node> && kubectl drain <node>    # migrate the node's pods
systemctl restart docker
kubectl uncordon <node>                          # let pods schedule back

Backup and restore

1) How backup and restore work:

Backup: snapshot the data from the running etcd cluster into a file on disk. Restore: load the backup file back into the etcd cluster, then rebuild the whole cluster from it.

2) For a cluster created with the kubeasz project, besides the etcd data you must also back up the CA certificate files and the ansible hosts file.

3) Manual steps:

mkdir -p ~/backup/k8s                                           # create the backup directory
ETCDCTL_API=3 etcdctl snapshot save ~/backup/k8s/snapshot.db    # back up the etcd data
cp /etc/kubernetes/ssl/ca* ~/backup/k8s/                        # back up the CA certificates

deploy節(jié)點(diǎn)執(zhí)行

ansible-playbook /etc/ansible/99.clean.yml    # simulate a cluster crash

The recovery steps (run on the deploy node) are as follows:

a)恢復(fù)ca證書

mkdir -p /etc/kubernetes/ssl /backup/k8s
cp ~/backup/k8s/* /backup/k8s/
cp /backup/k8s/ca* /etc/kubernetes/ssl/

b) Rebuild the cluster
Only the first five playbooks are needed; everything else is preserved in etcd.

cd /etc/ansible
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml

c)恢復(fù)etcd數(shù)據(jù)

停止服務(wù)

ansible etcd -m service -a 'name=etcd state=stopped'

Remove the data files:

ansible etcd -m file -a 'name=/var/lib/etcd/member/ state=absent'

Log in to each etcd node, and, following that node's /etc/systemd/system/etcd.service unit file, substitute its values for the {{ }} variables below and run:

cd /backup/k8s/
ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
  --name etcd1 \
  --initial-cluster etcd1=https://192.168.2.10:2380,etcd2=https://192.168.2.11:2380,etcd3=https://192.168.2.12:2380 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.2.10:2380

執(zhí)行上面的步驟后,會(huì)生成{{ NODE_NAME }}.etcd目錄

cp -r etcd1.etcd/member /var/lib/etcd/
systemctl restart etcd

d) Rebuild the network from the deploy node

ansible-playbook /etc/ansible/tools/change_k8s_network.yml

4) If you prefer not to restore manually, ansible can do it automatically

First take a one-shot backup:

ansible-playbook /etc/ansible/23.backup.yml

Check that files exist under /etc/ansible/roles/cluster-backup/files:

tree /etc/ansible/roles/cluster-backup/files/

├── ca                      # cluster CA backup
│   ├── ca-config.json
│   ├── ca.csr
│   ├── ca-csr.json
│   ├── ca-key.pem
│   └── ca.pem
├── hosts                   # ansible hosts backup
│   ├── hosts               # most recent backup
│   └── hosts-201807231642
├── readme.md
└── snapshot                # etcd data backup
    ├── snapshot-201807231642.db
    └── snapshot.db         # most recent backup

Simulate a failure:

ansible-playbook /etc/ansible/99.clean.yml

Edit /etc/ansible/roles/cluster-restore/defaults/main.yml to choose which etcd snapshot to restore; if you leave it unchanged, the most recent one is used.

Restore:

ansible-playbook /etc/ansible/24.restore.yml
ansible-playbook /etc/ansible/tools/change_k8s_network.yml

Optional

Apply OS-level security hardening to all cluster nodes:

ansible-playbook roles/os-harden/os-harden.yml

See the os-harden project for details.

References:

This document is based on https://github.com/gjmzj/kubeasz. See also: deploying a cluster with kubeadm, https://blog.frognew.com/2018/08/kubeadm-install-kubernetes-1.11.html

轉(zhuǎn)載于:https://blog.51cto.com/m51cto/2344825

