

Installing Kubernetes Using ‘kubeadm’


This article explains what a Kubernetes (a.k.a. K8S) cluster is and which resources are important within it. It also covers the basic K8S resource hierarchy; Kubernetes release 1.5.2 is used as the reference here.

Deployment Components of K8S:

K8S Master

Also known as the ‘control plane’, the master is a single node (at the time of this writing) that hosts the following components (a quick health check is sketched after this list):

  • etcd: All persistent master state is stored in an instance of etcd. This provides a great way to store configuration data reliably.
  • Kubernetes API Server: The API server serves up the K8S APIs. It is intended to be a CRUD-y server, with most/all business logic implemented in separate components or in plug-ins. It mainly processes REST operations, validates them, and updates the corresponding objects in etcd (and eventually other stores).
  • Scheduler: The scheduler binds unscheduled pods to nodes via the ‘/binding’ API.
  • Controller Manager Server: All other cluster-level functions are currently performed by the Controller Manager. For instance, endpoints objects are created and updated by the endpoints controller, and nodes are discovered, managed, and monitored by the node controller.
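Once the cluster described later in this article is up, a quick way to confirm that these control-plane components are healthy is to query them from the master. A minimal sketch, assuming kubectl on the master can already reach the API server:

$ kubectl get componentstatuses      # health of the scheduler, controller-manager and etcd
$ kubectl get pods --all-namespaces  # the control-plane pods live in the kube-system namespace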

K8S Worker

K8S Workers (previously referred to as minions) are responsible for hosting the following components (a way to inspect the kubelet service is sketched after this list):

  • kubelet: The kubelet manages Pods and their containers, their images, their volumes, etc.
  • kube-proxy: Each node also runs a simple network proxy and load balancer. This reflects services as defined in the Kubernetes API on each node and can do simple TCP and UDP stream forwarding (round robin) across a set of backends.
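The kubelet runs as a host-level systemd service on every node, so the usual way to inspect it is through systemd. A minimal sketch, assuming a systemd-based distribution such as the CentOS 7 hosts used later in this article:

$ systemctl status kubelet   # is the kubelet service running on this node?
$ journalctl -u kubelet -f   # follow the kubelet logs while debugging a node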

An important point to note is that all of the components present on a worker node also run on the master node.

Various Components of a K8S Cluster are:

Service components are the basic set of executables that allow a user to interact with the K8S cluster.

  • kubeadm: The command to bootstrap the cluster.
  • kubectl: The command to control the cluster once it is running. You will only need this on the master, but it can be useful to have on the other nodes as well.
  • kubelet: The core component of K8S. It runs on all of the machines in your cluster and does things like starting pods and containers (a quick version check is sketched after this list).
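After the installation steps described later, it is worth confirming that all three binaries are on the PATH and report the expected release. A minimal sketch:

$ kubeadm version
$ kubectl version --client
$ kubelet --version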

Logical resources of a K8S Cluster include:

  • Pods: A pod models a group of applications or services that would have run on the same server in the pre-container world. In a containerized world these applications or services are simply running containers. Containers inside a pod share the same network namespace and can share data volumes as well.
  • Deployment: From version 1.5.x onwards, K8S creates and manages sets of replicated containers (actually, replicated Pods) using Deployments. A Deployment simply ensures that a specified number of pod “replicas” are running at any one time. If there are too many, it kills some; if there are too few, it starts more (a minimal manifest is sketched after this list).
  • Replication Controllers: An RC ensures that a given number of pods for a service is always running across the cluster.
  • Labels: Labels are key/value metadata that can be attached to any K8S resource (pods, RCs, services, nodes, etc.).
  • Services: A K8S service provides a stable endpoint (fixed virtual IP + port binding to the host servers) for a group of pods managed by a replication controller.
  • Volumes: A Volume is a directory on disk or in another container. A volume outlives any containers that run within the Pod, and the data is preserved across container restarts. The directory, the medium that backs it, and the contents within it are determined by the particular volume type used.
  • Selector: A selector is an expression that matches labels in order to identify related resources, such as which pods are targeted by a load-balanced service.
  • Name: A user- or client-provided name for a resource.
  • Namespace: A namespace is like a prefix to the name of a resource. Namespaces help different projects, teams, or customers to share a cluster, such as by preventing name collisions between unrelated teams.
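To make the relationship between pods, labels, selectors, Deployments and services concrete, here is a minimal sketch of a manifest that could be fed to kubectl. The ‘nginx-demo’ name and the image tag are illustrative assumptions, and the extensions/v1beta1 API group is the one the 1.5.x release used here expects for Deployments:

cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-demo                  # user-provided resource name
spec:
  replicas: 2                       # the Deployment keeps two pod replicas running
  template:
    metadata:
      labels:
        app: nginx-demo             # label attached to every pod created from this template
    spec:
      containers:
      - name: nginx
        image: nginx:1.11
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  selector:
    app: nginx-demo                 # the selector matches the pod label above
  ports:
  - port: 80                        # stable virtual IP + port in front of the matching pods
EOF

The Service’s selector is what ties the stable endpoint to whichever pods currently carry the app=nginx-demo label, which is exactly the label/selector relationship described in the list above.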

Resource Hierarchy for K8S Resources

The image below captures the way in which a few of the K8S resources can be deployed and shows how they relate to each other.

K8S Cluster Creation/ Installation

First, decide on the infrastructure you would like to use for the K8S cluster. Below is the setup I used for my testing:

  • VirtualBox 5.1.14 – to create different VMs.
  • 1 CentOS 7 VM – to be configured as K8S master node.
  • 2 CentOS 7 VMs – to be configured as K8S nodes.

Installation Steps

  • Installing kubelet and kubeadm on your hosts:

kubelet is the core component of K8S. It runs on all of the machines in your cluster and does things like starting pods and containers. kubeadm is the command to bootstrap the cluster. For both of these components to work you also need to install docker, kubectl and kubernetes-cni. Log in to your host and become the root user with the ‘su’ command, then run the following commands to install all of these packages (a quick verification step follows the commands):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
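Before moving on, a quick way to confirm that the packages actually landed and that the services were enabled. A minimal sketch using the rpm database on CentOS:

rpm -q docker kubelet kubeadm kubectl kubernetes-cni   # each package should report an installed version
systemctl is-enabled docker kubelet                    # both should report ‘enabled’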
  • Initializing your Master:

To initialize the master, pick one of the machines you previously installed kubelet and kubeadm on, and run:

$ getenforce     # returns the current SELinux mode, e.g. ‘Enforcing’ by default
$ setenforce 0   # sets the SELinux mode to ‘Permissive’
$ kubeadm init   # initialize and start the master


By default the Security-Enhanced Linux (i.e. SELinux) feature is enabled on CentOS 7.2. The ‘getenforce’ and ‘setenforce’ commands let you inspect and change the SELinux mode so that ‘kubeadm init’ can start properly.

Note: ‘kubeadm init’ will auto-detect the network interface to advertise the master on as the interface with the default gateway.
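Because the advertised address is derived from the default route, it can be worth confirming which interface that is before running ‘kubeadm init’. A minimal sketch; replace the interface name with whatever the first command reports:

$ ip route show default      # the interface listed here is the one kubeadm will advertise on
$ ip addr show <interface>   # confirm the IP address configured on that interface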

‘kubeadm init’ will download and install the cluster database and “control plane” components. This may take several minutes. The output should look like:

[token-discovery] kube-discovery is ready after 3.002745 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=6cb263.fbb4386199596a92

Make a record of the kubeadm join command that kubeadm init outputs. You will need it to register the worker nodes with the master. The token included here is a secret; keep it safe, because anyone with this token can add authenticated nodes to your cluster.

There are times when ‘kubeadm init’ just won’t complete on CentOS 7 version 7.2.1511 (Core). To resolve this, check that the ‘setenforce 0’ command was executed before ‘kubeadm init’.
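Note that ‘setenforce 0’ only lasts until the next reboot. If you want the permissive mode to survive reboots, the usual approach on CentOS is to edit /etc/selinux/config; a minimal sketch:

$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ getenforce   # still reports the running mode; the config file change takes effect from the next boot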


  • Installing a pod network:

You must install a pod network add-on so that your pods can communicate with each other. It is necessary to do this before you try to deploy any applications to your cluster, and before kube-dns will start up. Note also that kubeadm only supports CNI based networks, so kubenet based networks will not work. A few things I learned along the way:

  • You should install a CNI based virtual network implementation, e.g. flannel, before you try to start the pod network. For this I ended up building the latest flannel code from GitHub (https://github.com/coreos/flannel). You can also install flannel using rpms (http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/).
  • Once installed you need to start the flannel daemon. If this is the first time you are running flannel and have not configured it, you will keep getting the error “Failed to retrieve network config: client: etcd cluster is unavailable or misconfigured” on the console. The errors won’t stop until you start the ‘etcd’ component for K8S and add the basic network configuration under the key “/coreos.com/network/config”. More details on this follow below.
  • Execute the command “kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml”. For me the command “kubectl apply -f <add-on.yaml>” given on the official site did not work, as I kept getting a validation error for kube-flannel.yml.
  • The ‘etcdctl’ utility is not installed by default by the ‘kubeadm’ tool, so instead of using ‘etcdctl’ to add the network configuration value to the ‘etcd’ component I ended up using ‘curl’ commands:

    $ curl -L -X PUT http://127.0.0.1:2379/v2/keys/coreos.com/network/config -d value="{\"Network\": \"10.0.0.0/8\", \"SubnetLen\": 20, \"SubnetMin\": \"10.10.0.0\", \"SubnetMax\": \"10.99.0.0\", \"Backend\": {\"Type\": \"udp\", \"Port\": 7890}}"
    $ curl -L http://127.0.0.1:2379/v2/keys/coreos.com/network/config


    The first command creates the key ‘coreos.com/network/config’ with the provided values in JSON format. The second command retrieves the key value at the command prompt so you can verify whether the value was set properly.

    After executing the above ‘curl’ commands the flannel daemon will fetch the added configuration from ‘etcd’ and start watching for new subnet leases. Note that you might need to re-initialize the master by running ‘kubeadm reset’ followed by ‘kubeadm init --pod-network-cidr <subnet range>’ if you did not initialize the master with the subnet range in the first place (a sketch follows below). For my setup I used the subnet range 10.244.0.0/16.

    You can verify whether all the master pods have started by running ‘kubectl get pods --all-namespaces’. You should be able to see the ‘kube-dns’ pod status.
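    Putting the last two points together, the sequence for re-initializing the master with an explicit pod subnet looks roughly like this (a sketch, using the 10.244.0.0/16 range from above):

    $ kubeadm reset                                   # tear down the existing master state
    $ kubeadm init --pod-network-cidr 10.244.0.0/16   # re-initialize with the flannel-friendly subnet
    $ kubectl get pods --all-namespaces               # kube-dns should eventually report Running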


    • Joining the nodes:

    For each new machine that needs to be added as a node to your cluster, SSH to that machine, become root (e.g. ‘sudo su -’) and run the command that was output by ‘kubeadm init’. For example (a quick verification sketch follows the output):

    $ kubeadm join --token <token> <master-ip>
    [kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
    [preflight] Running pre-flight checks
    [preflight] Starting the kubelet service
    [tokens] Validating provided token

    Node join complete:
    * Certificate signing request sent to master and response
    received.
    * Kubelet informed of new secure connection details.

    Run ‘kubectl get nodes’ on the master to see this machine join
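    Back on the master, a quick sanity check after each join; a minimal sketch, where the node names will of course differ in your setup:

    $ kubectl get nodes -o wide           # newly joined workers should appear and move to Ready
    $ kubectl describe node <node-name>   # capacity, conditions and the pods scheduled on that node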


    If you are interested in learning about how REAN Cloud can support your container requirements and implement a DevOps transformation methodology, please contact us at info@reancloud.com

    References:

    • https://kubernetes.io/docs/user-guide/
    • https://dzone.com/storage/assets/2316499-dzone-refcardz233-kubernetes.pdf
    • https://www.vultr.com/docs/getting-started-with-kubernetes-on-centos-7
    • https://blog.couchbase.com/2016/march/kubernetes-namespaces-resource-quota-limits-qos-cluster
    • http://sharadchhetri.com/2013/02/27/how-to-disable-selinux-in-red-hat-or-centos/
    • http://webplay.pro/linux/change-hostname-permanently-centos-7.html

    Reposted from: https://my.oschina.net/u/3362827/blog/896667

