Although there are plenty of articles online about building a K8S cluster from scratch, most of them target older versions, and following them verbatim to install the latest 1.20 release runs into a pile of problems. I am therefore recording my installation steps here, in the hope of giving readers copy-and-paste help for standing up a cluster.
1. Preparation
Deploy a minimal K8S cluster: master + node1 + node2
Ubuntu is a desktop-oriented operating system based on Debian Linux, covering word processing, e-mail, software development tools, Web services, and more; it is free to download, use, and share.
$ vgs
Current machine states:

master                    running (virtualbox)
node1                     running (virtualbox)
node2                     running (virtualbox)
1.1 Basic environment information
# hostnamectl
vagrant@k8s-master:~$ hostnamectl
   Static hostname: k8s-master

# hosts
vagrant@k8s-master:~$ cat /etc/hosts
127.0.0.1        localhost
127.0.1.1        vagrant.vm    vagrant
192.168.30.30    k8s-master
192.168.30.31    k8s-node1
192.168.30.32    k8s-node2

# ping
vagrant@k8s-master:~$ ping k8s-node1
PING k8s-node1 (192.168.30.31) 56(84) bytes of data.
64 bytes from k8s-node1 (192.168.30.31): icmp_seq=1 ttl=64 time=0.689 ms
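Every node needs the same host entries; the steps above can be sketched as a small idempotent helper (hypothetical script, using the same IPs and names as the sample; it writes to a scratch file by default so it is safe to dry-run, and can be pointed at /etc/hosts with sudo on a real node):

```shell
#!/bin/sh
# Hypothetical helper: add the cluster's host entries idempotently.
# hosts_file defaults to a scratch file so the sketch is safe to run as-is.
hosts_file="${hosts_file:-/tmp/hosts.demo}"
touch "$hosts_file"

add_host_entry() {
  # $1 = hosts file, $2 = IP, $3 = hostname; append only when the name is absent
  grep -qw "$3" "$1" || printf '%s %s\n' "$2" "$3" >> "$1"
}

add_host_entry "$hosts_file" 192.168.30.30 k8s-master
add_host_entry "$hosts_file" 192.168.30.31 k8s-node1
add_host_entry "$hosts_file" 192.168.30.32 k8s-node2
cat "$hosts_file"
```

Running it a second time adds nothing, which makes it safe to bake into a provisioning script.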
1.2 Configuring the Aliyun mirrors
# Log in to the servers
$ vgssh master/node1/node2
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)

# Switch apt to the Aliyun Ubuntu mirror
$ sudo cp /etc/apt/sources.list{,.bak}
$ sudo vim /etc/apt/sources.list

# Add the Aliyun mirror of the kubeadm repository
$ sudo vim /etc/apt/sources.list
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
$ sudo gpg --keyserver keyserver.ubuntu.com --recv-keys BA07F4FB
$ sudo gpg --export --armor BA07F4FB | sudo apt-key add -

# Add the Docker repository
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo vim /etc/apt/sources.list
deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable

# Refresh the package index
$ sudo apt update
$ sudo apt dist-upgrade
1.3 Installing the base tooling

Tools installed during the deployment phase:

- base component: docker
- deployment tool: kubeadm
- routing rules: ipvsadm
- time sync: ntp

# Install the base tools
$ sudo apt install -y \
    docker-ce docker-ce-cli containerd.io \
    kubeadm ipvsadm \
    ntp ntpdate \
    nginx supervisor

# Add the current (non-root) user to the docker group (requires re-login)
$ sudo usermod -a -G docker $USER

# Enable and start the services
$ sudo systemctl enable docker.service
$ sudo systemctl start docker.service
$ sudo systemctl enable kubelet.service
$ sudo systemctl start kubelet.service
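To confirm the two services really are enabled and running, a read-only check along these lines can help (a sketch; it reduces the `systemctl` output to a one-line summary per service and falls back to "unknown" on hosts without systemd):

```shell
#!/bin/sh
# Sketch: summarize whether a service is enabled and active.
svc_status() {
  enabled="$(systemctl is-enabled "$1" 2>/dev/null)" || true
  active="$(systemctl is-active "$1" 2>/dev/null)" || true
  printf '%s: enabled=%s active=%s\n' "$1" "${enabled:-unknown}" "${active:-unknown}"
}

svc_status docker
svc_status kubelet
```

Note that kubelet restarts in a crash loop until `kubeadm init` has run; "activating" at this point is expected.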
1.4 Operating system configuration

OS-level changes:

- disable swap
- configure kernel parameters
- adjust the system time zone
- upgrade the kernel (the stock version is 4.15.0)

# Disable swap
$ sudo swapoff -a

# Tune kernel parameters for K8S
# (sysctl does not strip trailing comments, so keep each comment on its own line;
# a plain `sudo cat > …` would also fail, because the redirection runs as the
# normal user, hence `sudo tee`)
$ cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
# let bridged traffic pass through iptables (required)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# disable the IPv6 protocol (required)
net.ipv6.conf.all.disable_ipv6 = 1
# enable forwarding (on by default)
net.ipv4.ip_forward = 1
# do not panic on OOM (default)
vm.panic_on_oom = 0
# never use swap space
vm.swappiness = 0
# do not check whether physical memory is sufficient
vm.overcommit_memory = 1
fs.inotify.max_user_instances = 8192
fs.inotify.max_user_watches = 1048576
# file handle limits
fs.file-max = 52706963
fs.nr_open = 52706963
net.netfilter.nf_conntrack_max = 2310720
EOF

# Inspect a kernel parameter
$ sudo sysctl -a | grep xxx

# Apply the configuration file
$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf

# Set the system time zone to Asia/Shanghai
$ sudo timedatectl set-timezone Asia/Shanghai

# Keep the hardware clock (RTC) in UTC
$ sudo timedatectl set-local-rtc 0
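After `sysctl -p` it is worth confirming the values actually took effect; a read-only sketch (keys taken from the file above; it prints a comparison line per key rather than asserting, since the net.bridge.* keys only exist once br_netfilter is loaded):

```shell
#!/bin/sh
# Sketch: compare expected vs. running values for the keys set above.
# Keys that do not exist yet are reported as "unavailable" instead of failing.
check_sysctl() {
  actual="$(sysctl -n "$1" 2>/dev/null)" || true
  printf '%-40s expected=%-8s actual=%s\n' "$1" "$2" "${actual:-unavailable}"
}

check_sysctl net.ipv4.ip_forward 1
check_sysctl net.bridge.bridge-nf-call-iptables 1
check_sysctl vm.swappiness 0
```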
1.5 Enabling the ipvs service

# Load the individual module
$ modprobe br_netfilter

# Write the module list
$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Load the modules
$ chmod 755 /etc/sysconfig/modules/ipvs.modules \
    && bash /etc/sysconfig/modules/ipvs.modules \
    && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
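A quick way to see which of the required modules are still not loaded (a sketch; it parses `lsmod` output read from stdin, so it can also be dry-run against any saved listing):

```shell
#!/bin/sh
# Sketch: print the required ipvs modules that are absent from `lsmod` output.
required="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"

missing_modules() {
  # Reads `lsmod` output on stdin; prints each required module not listed.
  loaded="$(awk 'NR > 1 { print $1 }')"
  for m in $required; do
    echo "$loaded" | grep -qx "$m" || echo "$m"
  done
}

lsmod 2>/dev/null | missing_modules
```

An empty result means everything is in place.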
2. Deploying the master node
Minimum node spec: 2 CPUs + 2 GB RAM; give the worker nodes as much headroom as you can.
The init command of the kubeadm tool initializes a master deployed as a single node. To avoid needing a proxy for Google's registries, you can use Alibaba Cloud's mirror of the Google images instead, by passing the corresponding address when running the kubeadm deployment command. You could also push the images into a local registry, which is easier to maintain.
Covered below:

- notes and caveats
- the Aliyun mirror address for the Google images
- customizing the control-plane configuration with kubeadm
# Log in to the server
$ vgssh master
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-50-generic x86_64)

# Initialize the node (command-line flags)
# Note: the pod and service CIDRs must differ (otherwise init reports an error)
$ sudo kubeadm init \
    --kubernetes-version=1.20.2 \
    --image-repository registry.aliyuncs.com/google_containers \
    --apiserver-advertise-address=192.168.30.30 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.245.0.0/16

# Or initialize from a configuration file
$ sudo kubeadm init --config ./kubeadm-config.yaml
Your Kubernetes control-plane has initialized successfully!

# Check whether the IP ranges took effect (iptables/routes)
$ ip route show
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink

# Check whether the IP ranges took effect (ipvs)
$ ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
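The rule that the pod and service CIDRs must not overlap can be checked up front; here is a sketch of the arithmetic in plain shell (integer IPv4 math; it assumes well-formed a.b.c.d/len input):

```shell
#!/bin/sh
# Sketch: check whether two IPv4 CIDR blocks overlap.
ip2int() {
  # dotted quad -> 32-bit integer
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidrs_overlap() {
  # $1 and $2 like 10.244.0.0/16; prints yes or no
  n1=$(ip2int "${1%/*}"); s1=$(( 1 << (32 - ${1#*/}) ))
  n2=$(ip2int "${2%/*}"); s2=$(( 1 << (32 - ${2#*/}) ))
  if [ "$n1" -lt $(( n2 + s2 )) ] && [ "$n2" -lt $(( n1 + s1 )) ]; then
    echo yes
  else
    echo no
  fi
}

# The two ranges passed to kubeadm init above do not overlap:
cidrs_overlap 10.244.0.0/16 10.245.0.0/16   # prints: no
```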
The configuration file:

- uses the v1beta2 API
- sets the master node IP address to 192.168.30.30
- allocates the 10.244.0.0/16 subnet to flannel
- selects Kubernetes 1.20.2, the latest version at the time of writing
- adds horizontal-scaling settings for the controllerManager
# kubeadm-config.yaml
# sudo kubeadm config print init-defaults > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
apiServer:
  extraArgs:
    advertise-address: 192.168.30.30
networking:
  podSubnet: 10.244.0.0/16
controllerManager:
  extraArgs:
    horizontal-pod-autoscaler-use-rest-clients: "true"
    horizontal-pod-autoscaler-sync-period: "10s"
    node-monitor-grace-period: "10s"
# master setting, step one
To start cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

# master setting, step two
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.30.30:6443 \
    --token lebbdi.p9lzoy2a16tmr6hq \
    --discovery-token-ca-cert-hash \
    sha256:6c79fd83825d7b2b0c3bed9e10c428acf8ffcd615a1d7b258e9b500848c20cae
$ kubectl get nodes
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   62m   v1.20.2
k8s-node1    NotReady   <none>                 82m   v1.20.2
k8s-node2    NotReady   <none>                 82m   v1.20.2
# List tokens
$ sudo kubeadm token list

# Create a token
$ sudo kubeadm token create

# Recover the sha256 hash if you lost it
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'

# Generate a fresh token (more convenient than the above)
$ kubeadm token generate

# Print the full join command directly (more convenient than the above)
$ kubeadm token create <token_generate> --print-join-command --ttl=0
Once init has completed, you can inspect the control plane with the following commands:

- namespaces: default, kube-system, kube-public, kube-node-lease
- coredns, etcd
- kube-apiserver, kube-scheduler
- kube-controller-manager, kube-proxy
# Namespaces
$ kubectl get namespace
NAME              STATUS   AGE
default           Active   19m
kube-node-lease   Active   19m
kube-public       Active   19m
kube-system       Active   19m

# Core services
$ kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bh42f             1/1     Running   0          19m
coredns-7f89b7bc75-dvzpl             1/1     Running   0          19m
etcd-k8s-master                      1/1     Running   0          19m
kube-apiserver-k8s-master            1/1     Running   0          19m
kube-controller-manager-k8s-master   1/1     Running   0          19m
kube-proxy-5rlpv                     1/1     Running   0          19m
kube-scheduler-k8s-master            1/1     Running   0          19m
3. Deploying the flannel network
The network add-on manages the service network inside the K8S cluster.
flannel must be told its IP range, i.e. the 10.244.0.0/16 set through the configuration file in the previous step. You could deploy it straight from the official flannel manifests with the HELM tool, but the upstream address requires a proxy. So save its contents into the configuration file below and change the image addresses accordingly.

# Deploy the flannel service
# 1. change the image address (if the default cannot be pulled)
# 2. set Network to the value passed as --pod-network-cidr
$ kubectl apply -f ./kube-flannel.yml

# If the deployment runs into problems, inspect it with:
$ kubectl logs kube-flannel-ds-6xxs5 --namespace=kube-system
$ kubectl describe pod kube-flannel-ds-6xxs5 --namespace=kube-system

- If you hit problems in use, consult the official troubleshooting guide.
- Because the machines here are virtualized with Vagrant, applying the yaml initially fails. The logs show that flannel binds by default to the VM's eth0 NIC, which is the one Vagrant itself uses; it should bind eth1 instead.
- Vagrant typically assigns two interfaces to every VM: the first gets the IP address 10.0.2.15 on every host and carries NATed outbound traffic, which breaks the flannel deployment. Per the official issue notes, the --iface=eth1 flag selects the second NIC.
- You can add the flag as described in the "flannel use --iface=eth1" answer; here I edited the startup manifest directly and passed it via args, as shown below.
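Before hard-coding --iface=eth1, it can be worth double-checking which interface actually carries the host-only subnet; something like this helps (a sketch; it parses `ip -o -4 addr show` output, so it can also be tested against a captured sample):

```shell
#!/bin/sh
# Sketch: print the first interface whose IPv4 address starts with a prefix.
iface_for_prefix() {
  # $1 = address prefix such as 192.168.30.; reads `ip -o -4 addr show` on stdin
  awk -v p="$1" '$4 ~ "^" p { print $2; exit }'
}

ip -o -4 addr show 2>/dev/null | iface_for_prefix 192.168.30.
```

On the Vagrant boxes above this prints eth1, which is the value to put in the DaemonSet args.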
$ kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-bh42f             1/1     Running   0          61m
coredns-7f89b7bc75-dvzpl             1/1     Running   0          61m
etcd-k8s-master                      1/1     Running   0          62m
kube-apiserver-k8s-master            1/1     Running   0          62m
kube-controller-manager-k8s-master   1/1     Running   0          62m
kube-flannel-ds-zl148                1/1     Running   0          44s
kube-flannel-ds-ll523                1/1     Running   0          44s
kube-flannel-ds-wpmhw                1/1     Running   0          44s
kube-proxy-5rlpv                     1/1     Running   0          61m
kube-scheduler-k8s-master            1/1     Running   0          62m
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ["NET_ADMIN", "NET_RAW"]
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: "RunAsAny"
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames: ["psp.flannel.unprivileged"]
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.13.1-rc1
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.13.1-rc1
          command:
            - /opt/bin/flanneld
          args:
            - --ip-masq
            - --kube-subnet-mgr
            - --iface=eth1
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: false
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   62m   v1.20.2
k8s-node1    Ready    <none>                 82m   v1.20.2
k8s-node2    Ready    <none>                 82m   v1.20.2

# Reset the cluster (if you need to start over)
$ sudo kubeadm reset
$ sudo kubeadm init
4. Deploying the dashboard service
The dashboard is a visual web page for monitoring the state of the cluster.
Fetching its startup manifest again requires a proxy; the corresponding download address is below, so you can download it elsewhere and upload it to the server before deploying.
# Deploy the dashboard service
$ kubectl apply -f ./kube-dashboard.yaml

# If the deployment runs into problems, inspect it with:
$ kubectl logs \
    kubernetes-dashboard-c9fb67ffc-nknpj \
    --namespace=kubernetes-dashboard
$ kubectl describe pod \
    kubernetes-dashboard-c9fb67ffc-nknpj \
    --namespace=kubernetes-dashboard

$ kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.245.214.11    <none>        8000/TCP   26s
kubernetes-dashboard        ClusterIP   10.245.161.146   <none>        443/TCP    26s
Note that the dashboard does not allow external access by default, even when exposed through kubectl proxy. It also only accepts HTTPS, and the CA certificate self-signed during kubeadm init is not trusted by browsers.
The approach I took is to put Nginx in front as a reverse proxy, serve a valid certificate from Let's Encrypt externally, and use the proxy_pass directive to forward to kubectl proxy, as shown below. Locally, the dashboard service is then reachable on port 8888, and from outside through Nginx.
# Proxy (can be managed with supervisor)
$ kubectl proxy --accept-hosts='^*$'
$ kubectl proxy --port=8888 --accept-hosts='^*$'

# Test whether the proxy works (it listens on port 8001 by default)
$ curl -X GET -L http://localhost:8001

# Local upstream (can be fronted by nginx)
proxy_pass http://localhost:8001;
proxy_pass http://localhost:8888;

# External access goes to the following URL
https://mydomain/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
# k8s.conf
client_max_body_size 80M;
client_body_buffer_size 128k;
proxy_connect_timeout 600;
proxy_read_timeout 600;
proxy_send_timeout 600;

server {
    listen 8080 ssl;
    server_name _;

    ssl_certificate /etc/kubernetes/pki/ca.crt;
    ssl_certificate_key /etc/kubernetes/pki/ca.key;

    access_log /var/log/nginx/k8s.access.log;
    error_log /var/log/nginx/k8s.error.log error;

    location / {
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/;
    }
}
# k8s.conf
[program:k8s-master]
command=kubectl proxy --accept-hosts='^*$'
user=vagrant
environment=KUBECONFIG="/home/vagrant/.kube/config"
stopasgroup=true
killasgroup=true
autostart=true
autorestart=unexpected
stdout_logfile_maxbytes=1MB
stdout_logfile_backups=10
stderr_logfile_maxbytes=1MB
stderr_logfile_backups=10
stderr_logfile=/var/log/supervisor/k8s-stderr.log
stdout_logfile=/var/log/supervisor/k8s-stdout.log
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames:
      [
        "kubernetes-dashboard-key-holder",
        "kubernetes-dashboard-certs",
        "kubernetes-dashboard-csrf",
      ]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames:
      [
        "heapster",
        "http:heapster:",
        "https:heapster:",
        "dashboard-metrics-scraper",
        "http:dashboard-metrics-scraper",
      ]
    verbs: ["get"]

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: registry.cn-shanghai.aliyuncs.com/jieee/dashboard:v2.0.4
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: "runtime/default"
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: registry.cn-shanghai.aliyuncs.com/jieee/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
# Create the admin account (dashboard)
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Bind the user to the existing cluster-admin role
$ cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF

# Fetch a token usable for logging in
$ kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret \
    | grep admin-user | awk '{print $1}')
# Create a serviceaccount
$ kubectl create serviceaccount dashboard-admin -n kube-system

# Bind the serviceaccount to cluster-admin,
# granting it admin access over the whole cluster
$ kubectl create clusterrolebinding \
    dashboard-cluster-admin --clusterrole=cluster-admin \
    --serviceaccount=kube-system:dashboard-admin

# List the serviceaccount's secrets; the token is stored there
$ kubectl get secret -n kube-system

# Use the dashboard-admin-token-slfcr name obtained above
$ kubectl describe secret <dashboard-admin-token-slfcr> -n kube-system

# Open the login page in a browser and paste the token in
https://192.168.30.30:8080/

# Shortcut for viewing the token
$ kubectl describe secrets -n kube-system \
    $(kubectl -n kube-system get secret | awk '/admin/{print $1}')
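The grep/awk pipelines above can be collapsed into a single small filter for the token value (a sketch; the secret name is whatever `kubectl get secret` showed, and the example output format below mirrors `kubectl describe secret`):

```shell
#!/bin/sh
# Sketch: extract just the token value from `kubectl describe secret` output.
extract_token() {
  awk '$1 == "token:" { print $2 }'
}

# Hypothetical usage on the master (kubectl and the secret must exist there):
kubectl -n kubernetes-dashboard describe secret \
  "$(kubectl -n kubernetes-dashboard get secret 2>/dev/null \
     | awk '/admin-user/ { print $1; exit }')" 2>/dev/null | extract_token
```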