Deploying Kubernetes with RKE
1 Preparation
1.1 Cluster layout
One nginx load balancer plus two machines that RKE turns into a Kubernetes cluster; a highly available Rancher will then be deployed on top of that cluster.
| IP | Hostname | Role |
| --- | --- | --- |
| 1.117.61.155 | nginx | load balancer |
| 81.68.101.212 | k8s-node01 | controlplane,etcd,worker |
| 81.68.229.215 | k8s-node02 | controlplane,etcd,worker |
The Tencent Cloud servers were bought separately, which is why the IP addresses are not contiguous.
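The load-balancer side is never shown in this post. As a sketch only, the nginx `stream` module can balance TCP traffic to the two controlplane nodes; the upstream name, the `/tmp` path, and the choice to forward the apiserver port 6443 are my assumptions, not from the original setup:

```shell
# Sketch: write an nginx stream (TCP) config that forwards the Kubernetes
# apiserver port to both controlplane nodes. The file path and upstream
# name are illustrative; on the real LB this would live under /etc/nginx/
# (inside the top-level stream context) followed by `nginx -s reload`.
cat > /tmp/k8s-lb.conf << 'EOF'
stream {
    upstream kube_apiserver {
        server 81.68.101.212:6443;
        server 81.68.229.215:6443;
    }
    server {
        listen 6443;
        proxy_pass kube_apiserver;
    }
}
EOF
grep -c 'server 81.68' /tmp/k8s-lb.conf   # expect 2 backend entries
```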
1.2 Set the hostnames (all nodes)

```shell
# master:
hostnamectl set-hostname nginx
# node1:
hostnamectl set-hostname k8s-node01
# node2:
hostnamectl set-hostname k8s-node02
```

If the command fails, edit /etc/hostname directly; the change takes effect after a reboot.

```shell
hostname  # verify the new hostname
```
1.3 Install common tools

```shell
yum -y install lrzsz vim gcc glibc openssl openssl-devel net-tools wget curl
```

1.4 Disable the firewall (all nodes)

```shell
systemctl stop firewalld
systemctl disable firewalld
```

For steps repeated on every node, a terminal's "send input to all sessions" feature saves a lot of typing.
1.5 Disable SELinux (all nodes)

```shell
setenforce 0  # disable immediately (until reboot)
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config  # disable permanently
```

1.6 Disable swap (all nodes)

```shell
swapoff -a  # disable immediately; swap is turned off mainly for performance
sed -ri 's/.*swap.*/#&/' /etc/fstab  # disable permanently
free        # verify that swap is off
```
1.7 Sync the clocks (all nodes)
This prevents certificate-validation errors caused by clock skew between client and server.

```shell
yum install ntpdate -y
ntpdate time.windows.com
```

1.8 Kernel tuning (all nodes)

```shell
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.neigh.default.gc_thresh1=4096
net.ipv4.neigh.default.gc_thresh2=6144
net.ipv4.neigh.default.gc_thresh3=8192
EOF
# load the new settings
sysctl -p
```

1.9 Map hostnames to IPs (all nodes)

```shell
cat >> /etc/hosts << EOF
1.117.61.155 nginx
81.68.101.212 k8s-node01
81.68.229.215 k8s-node02
EOF
```

1.10 Load the ipvs kernel modules
Since ipvs is already part of the mainline kernel, running kube-proxy in ipvs mode only requires loading the kernel modules below first.
Run the following script on every Kubernetes node:
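The script itself did not survive in the post, so this is a commonly used stand-in (the module list comes from standard kube-proxy ipvs guides, and the /tmp path is illustrative; production setups usually place it under /etc/sysconfig/modules/ so it reloads on boot):

```shell
# Assumed ipvs bootstrap script, not from the original post.
# Note: on kernels >= 4.19, nf_conntrack_ipv4 is replaced by nf_conntrack.
cat > /tmp/ipvs.modules << 'EOF'
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /tmp/ipvs.modules
# On a real node, run it and confirm the modules loaded:
#   bash /tmp/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```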
2 Install Docker (all nodes)

```shell
# 1 install dependencies
yum install -y yum-utils

# 2 add the docker repository
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# the upstream repo is slow; a domestic mirror is faster
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 3 install docker
# sudo yum install docker-ce docker-ce-cli containerd.io
# 3 or pin a specific version, e.g. 18.09.7 -- do not install the latest
#   release; in my tests it is not compatible with RKE
# yum list docker-ce --showduplicates | sort -r  # list available versions
# yum install docker-ce-18.09.7 docker-ce-cli-18.09.7

# 4 configure the Tencent Cloud registry mirror
cat > /etc/docker/daemon.json << EOF
{"registry-mirrors": ["https://mirror.ccs.tencentyun.com"]}
EOF

# 5 restart docker
systemctl restart docker
```

3 Deploy the cluster with RKE
```shell
# 1 create a regular user -- rke must not run as root (all three nodes)
useradd ops -G docker
echo "123456" | passwd --stdin ops

# 2 download rke_linux-amd64, make it executable, and put it on the PATH (node1)
#   download: https://github.com/rancher/rke/releases/tag/v1.0.11
chmod +x rke_linux-amd64
cp rke_linux-amd64 /usr/local/bin/rke

# 3 passwordless SSH between the nodes (node1)
su - ops
ssh-keygen                     # press Enter three times
ssh-copy-id ops@81.68.101.212  # answer yes, then enter the password
ssh-copy-id ops@81.68.229.215

# 4 generate the cluster config file
cat > cluster.yml << EOF
nodes:
- address: 81.68.101.212
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ops
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 81.68.229.215
  port: "22"
  internal_address: ""
  role:
  - controlplane
  - worker
  - etcd
  hostname_override: ""
  user: ops
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config:
      enabled: true
      interval_hours: 12
      retention: 6
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.59
  nginx_proxy: rancher/rke-tools:v0.1.59
  cert_downloader: rancher/rke-tools:v0.1.59
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.59
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.7
  kubernetes: rancher/hyperkube:v1.17.9-rancher1
  flannel: rancher/coreos-flannel:v0.12.0
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/calico-node:v3.13.4
  calico_cni: rancher/calico-cni:v3.13.4
  calico_controllers: rancher/calico-kube-controllers:v3.13.4
  calico_ctl: rancher/calico-ctl:v3.13.4
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  canal_node: rancher/calico-node:v3.13.4
  canal_cni: rancher/calico-cni:v3.13.4
  canal_flannel: rancher/coreos-flannel:v0.12.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  weave_node: weaveworks/weave-kube:2.6.4
  weave_cni: weaveworks/weave-npc:2.6.4
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.32.0-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.4
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
EOF

# 5 bring the cluster up
rke up
```

This step can fail with a few different errors:
This happens because in YAML every `:` and `-` must be followed by a space, and indentation has to line up. Go to the line number the error reports and fix it.
(in vi, `:set nu` displays line numbers)
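For illustration, here is the kind of node entry that trips this error; the values are reused from cluster.yml above:

```yaml
# parses: every ':' and '-' is followed by a space
- address: 81.68.101.212
  user: ops

# rejected: without the spaces, YAML no longer sees key/value pairs
# (each line becomes a plain string), so rke up reports a config error
- address:81.68.101.212
  user:ops
```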
This was the error I hit most often. According to the message it is a network problem: those two ports cannot be reached. Opening the two ports in the server's firewall fixed it, but then still other ports failed to connect, so I simply opened all TCP ports and the errors went away.
Running rke up again clears this one; I never worked out the root cause.
This is what a successful deployment looks like:
```shell
rke up
...
INFO[0049] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0049] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0049] [addons] Executing deploy job rke-ingress-controller
INFO[0059] [ingress] ingress controller nginx deployed successfully
INFO[0059] [addons] Setting up user addons
INFO[0059] [addons] no user addons defined
INFO[0059] Finished building Kubernetes cluster successfully
```

4 Install kubectl
```shell
# 1 after a successful deployment the cluster files are present
[ops@node01 ~]$ ls
cluster.rkestate  cluster.yml  kube_config_cluster.yml

# 2 copy kube_config_cluster.yml to /root/.kube/config
su root
mkdir -p /root/.kube/
cp /home/ops/kube_config_cluster.yml /root/.kube/config

# 3 install kubectl
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl

# 4 check the cluster
[root@node01 rke]# kubectl get nodes -o wide
NAME          STATUS   ROLES               AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master1   Ready    controlplane,etcd   10h   v1.17.9   10.0.4.6      <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.13
k8s-master2   Ready    controlplane,etcd   10h   v1.17.9   10.0.4.15     <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.13
k8s-node1     Ready    worker              10h   v1.17.9   10.0.4.16     <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.13

# check the pods
[root@node01 rke]# kubectl get pods -A
NAMESPACE       NAME                                      READY   STATUS      RESTARTS   AGE
ingress-nginx   default-http-backend-67cf578fc4-pvktd     1/1     Running     0          5h37m
ingress-nginx   nginx-ingress-controller-s7fcj            1/1     Running     0          5h37m
kube-system     canal-4pc9k                               2/2     Running     0          10h
kube-system     canal-nrpvx                               2/2     Running     0          10h
kube-system     canal-vmqq9                               2/2     Running     0          10h
kube-system     coredns-7c5566588d-j6xpw                  1/1     Running     0          5h37m
kube-system     coredns-autoscaler-65bfc8d47d-2sg8d       1/1     Running     0          5h37m
kube-system     metrics-server-6b55c64f86-g56ww           1/1     Running     0          5h37m
kube-system     rke-coredns-addon-deploy-job-fss2d        0/1     Completed   0          5h37m
kube-system     rke-ingress-controller-deploy-job-vnjcl   0/1     Completed   0          5h37m
kube-system     rke-metrics-addon-deploy-job-8kj9x        0/1     Completed   0          5h37m
kube-system     rke-network-plugin-deploy-job-bjnvh       0/1     Completed   0          10h
```

5 Smoke test: run an nginx service
```shell
# 1 deployment manifest
cat > nginx-dep.yml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
EOF

# 2 create it
kubectl apply -f nginx-dep.yml

# 3 check the status
kubectl get deployment nginx-deployment -o wide

# 4 expose a port with a service
cat > nginx-svc.yml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort
EOF
kubectl apply -f nginx-svc.yml

# check
[root@nginx ops]# kubectl get svc nginx-service -o wide
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE    SELECTOR
nginx-service   NodePort   10.43.107.8   <none>        80:30080/TCP   102s   app=nginx

# details
[root@nginx ops]# kubectl describe svc nginx-service
Name:                     nginx-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx
Type:                     NodePort
IP Families:              <none>
IP:                       10.43.107.8
IPs:                      <none>
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30080/TCP
Endpoints:                10.42.0.5:80,10.42.1.4:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
```

Then visit 81.68.229.215:30080.
Summary