Setting up the Kubernetes UI (Dashboard)
1. Deploy the Kubernetes platform. At least two servers are required; three are used here:

Kubernetes Master node: 192.168.0.111
Kubernetes Node1 node:  192.168.0.112
Kubernetes Node2 node:  192.168.0.113

2. Run the following commands on every server:
systemctl stop firewalld
systemctl disable firewalld
yum -y install ntp
ntpdate pool.ntp.org    # keep the clocks on all servers in sync
systemctl start ntpd
systemctl enable ntpd

3. Install and configure the Kubernetes Master. On the Master node, install etcd, Kubernetes, and the flannel network:
yum install kubernetes-master etcd flannel -y

Configure /etc/etcd/etcd.conf on the Master as follows:
cat > /etc/etcd/etcd.conf <<EOF
# [member]
ETCD_NAME=etcd1
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.111:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.111:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.111:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.111:2380,etcd2=http://192.168.0.112:2380,etcd3=http://192.168.0.113:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.111:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
mkdir -p /data/etcd/; chmod 757 -R /data/etcd/
systemctl restart etcd.service

Configure /etc/kubernetes/config on the Master as follows:
cat > /etc/kubernetes/config <<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
#
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.111:8080"
EOF

This tells the controller-manager, scheduler, and proxy processes where to find the Kubernetes apiserver.
Configure /etc/kubernetes/apiserver on the Master as follows:
cat > /etc/kubernetes/apiserver <<EOF
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.111:2379,http://192.168.0.112:2379,http://192.168.0.113:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
EOF

for i in etcd kube-apiserver kube-controller-manager kube-scheduler; do systemctl restart $i; systemctl enable $i; systemctl status $i; done

This starts the etcd, apiserver, controller-manager, and scheduler processes on the Master node and checks their status.
4. Install and configure Kubernetes Node1

Install flannel, docker, and Kubernetes on the Node1 node:
yum install kubernetes-node etcd docker flannel *rhsm* -y

Then apply the following configuration on Node1.
Edit /etc/etcd/etcd.conf on Node1 as follows:
cat > /etc/etcd/etcd.conf <<EOF
##########
# [member]
ETCD_NAME=etcd2
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.112:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.112:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.112:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.111:2380,etcd2=http://192.168.0.112:2380,etcd3=http://192.168.0.113:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.112:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
mkdir -p /data/etcd/; chmod 757 -R /data/etcd/; service etcd restart

This configuration tells the flannel process where the etcd service lives and under which etcd key the network configuration is stored.
Node1 Kubernetes configuration — edit /etc/kubernetes/config:
cat > /etc/kubernetes/config <<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
#
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.111:8080"
EOF

Configure /etc/kubernetes/kubelet as follows:
cat > /etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.112"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.111:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF

for I in etcd kube-proxy kubelet docker; do systemctl restart $I; systemctl enable $I; systemctl status $I; done
iptables -P FORWARD ACCEPT

This starts the kube-proxy, kubelet, docker, and flanneld processes on the Kubernetes Node and checks their status.
Likewise, install flannel, docker, and Kubernetes on the Node2 node:
yum install kubernetes-node etcd docker flannel *rhsm* -y

Node2 etcd configuration:
Configure /etc/etcd/etcd.conf on Node2 as follows:
cat > /etc/etcd/etcd.conf <<EOF
##########
# [member]
ETCD_NAME=etcd3
ETCD_DATA_DIR="/data/etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="http://192.168.0.113:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.0.113:2379,http://127.0.0.1:2379"
ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.0.113:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.0.111:2380,etcd2=http://192.168.0.112:2380,etcd3=http://192.168.0.113:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.113:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#
#[proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
#ETCD_PROXY_WRITE_TIMEOUT="5000"
#ETCD_PROXY_READ_TIMEOUT="0"
#
#[security]
#ETCD_CERT_FILE=""
#ETCD_KEY_FILE=""
#ETCD_CLIENT_CERT_AUTH="false"
#ETCD_TRUSTED_CA_FILE=""
#ETCD_PEER_CERT_FILE=""
#ETCD_PEER_KEY_FILE=""
#ETCD_PEER_CLIENT_CERT_AUTH="false"
#ETCD_PEER_TRUSTED_CA_FILE=""
#
#[logging]
#ETCD_DEBUG="false"
# examples for -log-package-levels etcdserver=WARNING,security=DEBUG
#ETCD_LOG_PACKAGE_LEVELS=""
EOF
mkdir -p /data/etcd/; chmod 757 -R /data/etcd/; service etcd restart

Node2 Kubernetes configuration:
Edit /etc/kubernetes/config:
cat > /etc/kubernetes/config <<EOF
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
#
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.0.111:8080"
EOF

Configure /etc/kubernetes/kubelet as follows:
cat > /etc/kubernetes/kubelet <<EOF
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.113"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.0.111:8080"
# pod infrastructure container
#KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.0.123:5000/centos68"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
EOF

for I in etcd kube-proxy kubelet docker; do systemctl restart $I; systemctl enable $I; systemctl status $I; done
iptables -P FORWARD ACCEPT

At this point, running kubectl get nodes on the Master shows the two Node nodes that have joined the cluster. The Kubernetes cluster itself is now up.
5. Kubernetes flanneld network configuration

Configure flanneld on every server in the cluster (Master and minions). /etc/sysconfig/flanneld is as follows:
cat > /etc/sysconfig/flanneld <<EOF
# Flanneld configuration options
# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.0.111:2379"
# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF
service flanneld restart

On the Master, verify that the etcd cluster is healthy, and create the flannel network configuration in etcd.
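The walkthrough does not show the network definition itself, so here is a minimal sketch. The 172.17.0.0/16 CIDR, subnet length, and vxlan backend are assumptions — choose a range that does not collide with your LAN. The etcd key must be FLANNEL_ETCD_PREFIX from the config above plus /config:

```shell
# Hypothetical flannel network definition; CIDR and backend are placeholders.
FLANNEL_CONFIG='{"Network":"172.17.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'

# Sanity-check the JSON locally before writing it into etcd.
echo "$FLANNEL_CONFIG" | python3 -m json.tool > /dev/null && echo "config is valid JSON"

# On the Master, write it under the key flanneld reads, then restart
# flanneld on every host (uncomment when etcd is reachable):
# etcdctl --endpoints http://192.168.0.111:2379 set /atomic.io/network/config "$FLANNEL_CONFIG"
```

After flanneld restarts, each host should pick a /24 subnet out of this range for its docker bridge.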
6. The Kubernetes Dashboard UI
The most important job Kubernetes does is unified management and scheduling of a Docker container cluster. This is usually driven from the command line, which is inconvenient; a visual UI makes the cluster far easier to manage and maintain.
Import the two images below on the Node nodes ahead of time. The complete procedure for configuring the Kubernetes dashboard follows.
(1) docker load < pod-infrastructure.tgz, then retag the imported pod image:
docker tag $(docker images|grep none|awk '{print $3}') registry.access.redhat.com/rhel7/pod-infrastructure
(2) docker load < kubernetes-dashboard-amd64.tgz, then retag the imported dashboard image:
docker tag $(docker images|grep none|awk '{print $3}') bestwu/kubernetes-dashboard-amd64:v1.6.3
Then, on the Master, create dashboard-controller.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubernetes-dashboard
        image: bestwu/kubernetes-dashboard-amd64:v1.6.3
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
        - --apiserver-host=http://192.168.0.111:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30

Create dashboard-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090

Create the dashboard pods:
kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml

After creation, inspect the Pods and Services in detail:
kubectl get namespace
kubectl get deployment --all-namespaces
kubectl get svc --all-namespaces
kubectl get pods --all-namespaces
kubectl get pod -o wide --all-namespaces
kubectl describe service/kubernetes-dashboard --namespace="kube-system"
kubectl describe pod/kubernetes-dashboard-468712587-754dc --namespace="kube-system"
kubectl delete pod/kubernetes-dashboard-468712587-754dc --namespace="kube-system" --grace-period=0 --force
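Once the pod is Running, the dashboard is typically reachable through the apiserver's insecure port, since the deployment pins --apiserver-host to it. The /ui path below is an assumption based on the redirect apiservers of this era provided; the curl check is left commented out because it only works from a machine that can reach the master:

```shell
# Hypothetical access check; MASTER matches this walkthrough's addresses.
MASTER=192.168.0.111
DASHBOARD_URL="http://${MASTER}:8080/ui"
echo "open ${DASHBOARD_URL} in a browser"
# curl -s -o /dev/null -w '%{http_code}\n' "$DASHBOARD_URL"   # run on a host that can reach the master
```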
If pulling the pod-infrastructure image on the Nodes fails because /etc/rhsm/ca/redhat-uep.pem is missing (a common symptom on CentOS 7), the certificate can be extracted from the python-rhsm-certificates package:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
Note: the rpm2cpio command converts an rpm package into a cpio-format archive.
cpio is a tool for creating and restoring archives; it can copy files into an archive or copy them back out.
-i: extract from an archive
-v: list the files verbosely as the command runs
Reprinted from: https://www.cnblogs.com/legenidongma/p/10713409.html