
K8s Revisited (Part 18): RKE

Published: 2024/3/26

Introduction to RKE

Rancher Kubernetes Engine (RKE) is a CNCF-certified Kubernetes installer. RKE supports multiple operating systems, including macOS, Linux, and Windows, and can run on bare-metal servers (BMS) and virtualized servers.

Other Kubernetes deployment tools share a common problem: they have many prerequisites. For example, before you can use them you must install the kubelet, configure networking, and complete a series of other tedious steps. RKE simplifies deploying a Kubernetes cluster down to a single prerequisite: as long as you are running a version of Docker that RKE supports, RKE can install, deploy, and run a Kubernetes cluster. RKE can be used standalone, as a tool for creating Kubernetes clusters, or together with Rancher 2.x, as the component that deploys and runs Kubernetes clusters inside Rancher.
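As a sketch of checking that single prerequisite (the version numbers below are placeholders for illustration, not RKE's actual support matrix), you can compare an installed Docker version against a required minimum with `sort -V`:

```shell
# Compare an installed Docker version string against a minimum.
# Both version strings are placeholders; in practice "installed" would
# come from: docker version --format '{{.Server.Version}}'
installed="20.10.7"
minimum="19.03.0"

# sort -V orders version strings; if the minimum sorts first,
# the installed version meets or exceeds it.
lowest=$(printf '%s\n%s\n' "$minimum" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$minimum" ]; then
  echo "docker version ok"
else
  echo "docker too old"
fi
```

The same pattern works for any dotted version comparison in shell without external tooling.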


Creating the cluster configuration file

RKE uses a cluster configuration file, cluster.yml, to plan the nodes in the cluster: which nodes the cluster should contain and how Kubernetes should be deployed. Many cluster configuration options can be changed through this file. The code examples in the RKE documentation assume the cluster has only one node.

There are two ways to create the cluster configuration file cluster.yml:

  • Start from a minimal cluster.yml and add the information about your nodes to the file.
  • Use the rke config command to create the configuration file, entering the cluster parameters one by one.
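For the first option, a minimal cluster.yml can be written by hand or from a shell heredoc; the address and user below are placeholders:

```shell
# Write a minimal single-node cluster.yml (placeholder address/user).
cat > cluster.yml <<'EOF'
nodes:
  - address: 1.2.3.4
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker
EOF

# Quick sanity check: the file should define exactly one node.
grep -c 'address:' cluster.yml
```

From here you would edit the node entry (or add more) before running rke up.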

Using rke config

Run the rke config command to create a cluster.yml file in the current directory. The command walks you through all the parameters needed to create a cluster; see the cluster configuration options for details.

rke config --name cluster.yml

Other configuration options

Adding --empty to the same command creates a blank cluster configuration file.

rke config --empty --name cluster.yml

You can also use --print to display the contents of the cluster.yml file.

rke config --print

High-availability clusters

RKE supports high-availability clusters: you can configure multiple controlplane nodes in the cluster.yml file. RKE deploys the master components on every node listed as controlplane, and configures the kubelets' default connection address as 127.0.0.1:6443, the address at which nginx-proxy forwards requests to all master nodes.

To create a high-availability cluster, specify two or more nodes as controlplane.
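As an illustration of the point above (the addresses and user are placeholders, not from the original), a cluster.yml for a three-node HA control plane might look like:

```yaml
nodes:
  - address: 10.0.0.1
    user: ubuntu
    role: [controlplane, etcd]
  - address: 10.0.0.2
    user: ubuntu
    role: [controlplane, etcd]
  - address: 10.0.0.3
    user: ubuntu
    role: [controlplane, etcd, worker]
```

Running rke up against such a file deploys the master components on all three nodes.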

Certificates

Available as of v0.2.0

默認(rèn)情況下,Kubernetes 集群需要用到證書(shū),而 RKE 會(huì)自動(dòng)為所有集群組件生成證書(shū)。您也可以使用自定義證書(shū)。部署集群后,您可以管理這些自動(dòng)生成的證書(shū),詳情請(qǐng)參考管理自動(dòng)生成的證書(shū)。


Deploying a Kubernetes cluster with RKE

After creating the cluster.yml file, you can deploy the cluster with the following command. The command assumes cluster.yml is saved in the directory where you run it.

rke up

INFO[0000] Building Kubernetes cluster

INFO[0000] [dialer] Setup tunnel for host [10.0.0.1]

INFO[0000] [network] Deploying port listener containers

INFO[0000] [network] Pulling image [alpine:latest] on host [10.0.0.1]

...

INFO[0101] Finished building Kubernetes cluster successfully

The last line of output should read Finished building Kubernetes cluster successfully, which means the cluster was deployed successfully and is ready to use. During cluster creation, RKE creates a kubeconfig file named kube_config_cluster.yml, which you can use to control the Kubernetes cluster.

Note

If your cluster configuration file is not named cluster.yml, the generated kube_config file is named kube_config_<FILE_NAME>.yml accordingly.
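The naming rule above can be sketched with plain string manipulation (the file name is a placeholder; no rke binary needed):

```shell
# Derive the kubeconfig name RKE generates for a given config file name,
# following the note above: kube_config_<FILE_NAME>.yml
config_file="mycluster.yml"
kubeconfig="kube_config_${config_file}"
echo "$kubeconfig"   # kube_config_mycluster.yml
```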


Saving the files

Important

Save all of the files listed below; they are needed to maintain, troubleshoot, and upgrade the cluster.

Copy these files and save them to a secure location:

  • cluster.yml: the RKE cluster configuration file.
  • kube_config_cluster.yml: the kubeconfig file for the cluster; it contains credentials with full access to the cluster.
  • cluster.rkestate: the Kubernetes cluster state file; it also contains credentials with full access to the cluster. This file is only created when using RKE v0.2.0 or later.
Note

The names of kube_config_cluster.yml and cluster.rkestate depend on how you named the RKE cluster configuration file; if you changed the configuration file's name, the names of these two files may differ from those listed above.
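A minimal backup sketch for the three files (the file names follow the list above; placeholder files are created in a scratch directory so the example is self-contained):

```shell
# Work in a scratch directory with placeholder state files.
workdir=$(mktemp -d)
cd "$workdir"
touch cluster.yml kube_config_cluster.yml cluster.rkestate

# Copy the three files RKE produces to a backup location.
mkdir -p backup
cp cluster.yml kube_config_cluster.yml cluster.rkestate backup/
ls backup
```

In practice the backup target would be off-host storage, since two of the three files contain full-access credentials.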

Kubernetes cluster state file

The Kubernetes cluster state consists of the cluster configuration file cluster.yml and the certificates of the components in the cluster. Different versions of RKE store it in different places.

RKE v0.2.0 and later create a .rkestate file in the same directory as the cluster configuration file cluster.yml. The file contains the current state of the cluster, the RKE configuration, and the certificates. Keep a copy of this file in a safe place.

Versions of RKE before v0.2.0 store the cluster state as a secret. When updating the cluster state, RKE pulls the secret, modifies the state, and saves the new state back as a secret.

Related operations

After installing RKE, you may also need to perform these two related operations:

  • Manage certificates
  • Add or remove nodes

Example cluster.yml files

You can set many configuration options by editing RKE's cluster configuration file, cluster.yml. Below are a minimal example file and a full example file.

**Note:** if you are using Rancher v2.0.5 or v2.0.6 and configuring cluster options with a configuration file, service names must not contain any characters other than letters and underscores.

Minimal example

nodes:
  - address: 1.2.3.4
    user: ubuntu
    role:
      - controlplane
      - etcd
      - worker

Full example

nodes:
  - address: 1.1.1.1
    user: ubuntu
    role:
      - controlplane
      - etcd
    port: 2222
    docker_socket: /var/run/docker.sock
  - address: 2.2.2.2
    user: ubuntu
    role:
      - worker
    ssh_key_path: /home/user/.ssh/id_rsa
    ssh_key: |-
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----
    ssh_cert_path: /home/user/.ssh/test-key-cert.pub
    ssh_cert: |-
      ssh-rsa-cert-v01@openssh.com AAAAHHNzaC1yc2EtY2VydC12MDFAb3Bl…
  - address: example.com
    user: ubuntu
    role:
      - worker
    hostname_override: node3
    internal_address: 192.168.1.6
    labels:
      app: ingress
    taints:
      - key: test-key
        value: test-value
        effect: NoSchedule

# If set to true, RKE will not fail when unsupported Docker versions
# are found
ignore_docker_version: false

# Enable running cri-dockerd
# Up to Kubernetes 1.23, kubelet contained code called dockershim
# to support Docker runtime. The replacement is called cri-dockerd
# and should be enabled if you want to keep using Docker as your
# container runtime
# Only available to enable in Kubernetes 1.21 and higher
enable_cri_dockerd: true

# Cluster level SSH private key
# Used if no ssh information is set for the node
ssh_key_path: ~/.ssh/test

# Enable use of SSH agent to use SSH private keys with passphrase
# This requires the environment SSH_AUTH_SOCK configured pointing
# to your SSH agent which has the private key added
ssh_agent_auth: true

# List of registry credentials
# If you are using a Docker Hub registry, you can omit the url
# or set it to docker.io
# is_default set to true will override the system default
# registry set in the global settings
private_registries:
  - url: registry.com
    user: Username
    password: password
    is_default: true

# Bastion/Jump host configuration
bastion_host:
  address: x.x.x.x
  user: ubuntu
  port: 22
  ssh_key_path: /home/user/.ssh/bastion_rsa
  # or
  # ssh_key: |-
  #   -----BEGIN RSA PRIVATE KEY-----
  #
  #   -----END RSA PRIVATE KEY-----

# Set the name of the Kubernetes cluster
cluster_name: mycluster

# The Kubernetes version used. The default versions of Kubernetes
# are tied to specific versions of the system images.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
# In case the kubernetes_version and kubernetes image in
# system_images are defined, the system_images configuration
# will take precedence over kubernetes_version.
kubernetes_version: v1.10.3-rancher2

# System Images are defaulted to a tag that is mapped to a specific
# Kubernetes Version and not required in a cluster.yml.
# Each individual system image can be specified if you want to use a different tag.
#
# For RKE v0.2.x and below, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/types/blob/release/v2.2/apis/management.cattle.io/v3/k8s_defaults.go
#
# For RKE v0.3.0 and above, the map of Kubernetes versions and their system images is
# located here:
# https://github.com/rancher/kontainer-driver-metadata/blob/master/rke/k8s_rke_system_images.go
#
system_images:
  kubernetes: rancher/hyperkube:v1.10.3-rancher2
  etcd: rancher/coreos-etcd:v3.1.12
  alpine: rancher/rke-tools:v0.1.9
  nginx_proxy: rancher/rke-tools:v0.1.9
  cert_downloader: rancher/rke-tools:v0.1.9
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.9
  kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.8
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.8
  kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.8
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
  pod_infra_container: rancher/pause-amd64:3.1

services:
  etcd:
    # Custom uid/gid for etcd directory and files
    uid: 52034
    gid: 52034
    # if external etcd is used
    # path: /etcdcluster
    # external_urls:
    #   - https://etcd-example.com:2379
    # ca_cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # cert: |-
    #   -----BEGIN CERTIFICATE-----
    #   xxxxxxxxxx
    #   -----END CERTIFICATE-----
    # key: |-
    #   -----BEGIN PRIVATE KEY-----
    #   xxxxxxxxxx
    #   -----END PRIVATE KEY-----
  # Note for Rancher v2.0.5 and v2.0.6 users: If you are configuring
  # Cluster Options using a Config File when creating Rancher Launched
  # Kubernetes, the names of services should contain underscores
  # only: kube_api.
  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.43.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false
    # Encrypt secret data at Rest
    # Available as of v0.3.1
    secrets_encryption_config:
      enabled: true
      custom_config:
        apiVersion: apiserver.config.k8s.io/v1
        kind: EncryptionConfiguration
        resources:
          - resources:
              - secrets
            providers:
              - aescbc:
                  keys:
                    - name: k-fw5hn
                      secret: RTczRjFDODMwQzAyMDVBREU4NDJBMUZFNDhCNzM5N0I=
              - identity: {}
    # Enable audit logging
    # Available as of v1.0.0
    audit_log:
      enabled: true
      configuration:
        max_age: 6
        max_backup: 6
        max_size: 110
        path: /var/log/kube-audit/audit-log.json
        format: json
        policy:
          apiVersion: audit.k8s.io/v1 # This is required.
          kind: Policy
          omitStages:
            - "RequestReceived"
          rules:
            # Log pod changes at RequestResponse level
            - level: RequestResponse
              resources:
                - group: ""
                  # Resource "pods" doesn't match requests to any subresource of pods,
                  # which is consistent with the RBAC policy.
                  resources: ["pods"]
    # Using the EventRateLimit admission control enforces a limit on the number of events
    # that the API Server will accept in a given time period
    # Available as of v1.0.0
    event_rate_limit:
      enabled: true
      configuration:
        apiVersion: eventratelimit.admission.k8s.io/v1alpha1
        kind: Configuration
        limits:
          - type: Server
            qps: 6000
            burst: 30000
    # Enable AlwaysPullImages Admission controller plugin
    # Available as of v0.2.0
    always_pull_images: false
    # Add additional arguments to the kubernetes API server
    # This WILL OVERRIDE any existing defaults
    extra_args:
      # Enable audit log to stdout
      audit-log-path: "-"
      # Increase number of delete workers
      delete-collection-workers: 3
      # Set the level of log output to debug-level
      v: 4
  # Note for Rancher 2 users: If you are configuring Cluster Options
  # using a Config File when creating Rancher Launched Kubernetes,
  # the names of services should contain underscores only:
  # kube_controller. This only applies to Rancher v2.0.5 and v2.0.6.
  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.42.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.43.0.0/16
    # Add additional arguments to the kubernetes controller manager
    # This WILL OVERRIDE any existing defaults
    extra_args:
      # Set the level of log output to debug-level
      v: 4
      # Enable RotateKubeletServerCertificate feature gate
      feature-gates: RotateKubeletServerCertificate=true
      # Enable TLS Certificates management
      # https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/
      cluster-signing-cert-file: "/etc/kubernetes/ssl/kube-ca.pem"
      cluster-signing-key-file: "/etc/kubernetes/ssl/kube-ca-key.pem"
  kubelet:
    # Base domain for the cluster
    cluster_domain: cluster.local
    # IP address for the DNS service endpoint
    cluster_dns_server: 10.43.0.10
    # Fail if swap is on
    fail_swap_on: false
    # Configure pod-infra-container-image argument
    pod-infra-container-image: "k8s.gcr.io/pause:3.2"
    # Generate a certificate signed by the kube-ca Certificate Authority
    # for the kubelet to use as a server certificate
    # Available as of v1.0.0
    generate_serving_certificate: true
    extra_args:
      # Set max pods to 250 instead of default 110
      max-pods: 250
      # Enable RotateKubeletServerCertificate feature gate
      feature-gates: RotateKubeletServerCertificate=true
    # Optionally define additional volume binds to a service
    extra_binds:
      - "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins"
  scheduler:
    extra_args:
      # Set the level of log output to debug-level
      v: 4
  kubeproxy:
    extra_args:
      # Set the level of log output to debug-level
      v: 4

# Currently, only authentication strategy supported is x509.
# You can optionally create additional SANs (hostnames or IPs) to
# add to the API server PKI certificate.
# This is useful if you want to use a load balancer for the
# control plane servers.
authentication:
  strategy: x509
  sans:
    - "10.18.160.10"
    - "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com"

# Kubernetes Authorization mode
# Use mode: rbac to enable RBAC
# Use mode: none to disable authorization
authorization:
  mode: rbac

# If you want to set a Kubernetes cloud provider, you specify
# the name and configuration
cloud_provider:
  name: aws

# Add-ons are deployed using kubernetes jobs. RKE will give
# up on trying to get the job status after this timeout in seconds.
addon_job_timeout: 30

# Specify network plug-in (canal, calico, flannel, weave, or none)
network:
  plugin: canal
  # Specify MTU
  mtu: 1400
  options:
    # Configure interface to use for Canal
    canal_iface: eth1
    canal_flannel_backend_type: vxlan
    # Available as of v1.2.6
    canal_autoscaler_priority_class_name: system-cluster-critical
    canal_priority_class_name: system-cluster-critical
  # Available as of v1.2.4
  tolerations:
    - key: "node.kubernetes.io/unreachable"
      operator: "Exists"
      effect: "NoExecute"
      tolerationseconds: 300
    - key: "node.kubernetes.io/not-ready"
      operator: "Exists"
      effect: "NoExecute"
      tolerationseconds: 300
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 6

# Specify DNS provider (coredns or kube-dns)
dns:
  provider: coredns
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 20%
      maxSurge: 15%
  linear_autoscaler_params:
    cores_per_replica: 0.34
    nodes_per_replica: 4
    prevent_single_point_failure: true
    min: 2
    max: 3

# Specify monitoring provider (metrics-server)
monitoring:
  provider: metrics-server
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 8

# Currently only nginx ingress provider is supported.
# To disable ingress controller, set provider: none
# node_selector controls ingress placement and is optional
ingress:
  provider: nginx
  node_selector:
    app: ingress
  # Available as of v1.1.0
  update_strategy:
    strategy: RollingUpdate
    rollingUpdate:
      maxUnavailable: 5

# All add-on manifests MUST specify a namespace
addons: |-
  apiVersion: v1
  kind: Pod
  metadata:
    name: my-nginx
    namespace: default
  spec:
    containers:
      - name: my-nginx
        image: nginx
        ports:
          - containerPort: 80

addons_include:
  - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-operator.yaml
  - https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/rook-cluster.yaml
  - /path/to/manifest
