Advanced Kubernetes Usage (Part 2)
1. Providing nginx a configuration file via ConfigMap, and verifying it
A configMap volume provides a way to inject configuration data into a Pod. Data stored in a ConfigMap object can be referenced by a volume of type configMap and then consumed by the containerized applications running in the Pod.
When referencing a ConfigMap, you refer to it by name in the volume definition. You can also customize the path used for specific ConfigMap entries (a minimal sketch follows the notes below).
Notes:
- The ConfigMap must be created before it can be used.
- A container that mounts a ConfigMap with subPath will not receive updates to that ConfigMap.
- Text data is mounted as files using UTF-8 encoding. For any other encoding, use the binaryData field.
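The manifest that created the ConfigMap is not reproduced in the capture below, so here is a minimal sketch of what it could look like. Everything except the mount path /data/nginx/html (used in the verification) is an assumption: the ConfigMap name, the data key, and the server block are illustrative only.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config            # hypothetical name, not from the original
data:
  default.conf: |               # hypothetical key; rendered as a file when mounted
    server {
      listen 80;
      server_name localhost;
      location / {
        root /data/nginx/html;  # the directory checked in the verification below
        index index.html;
      }
    }
---
# Pod template fragment of the nginx Deployment (sketch only):
#     volumeMounts:
#     - name: nginx-config
#       mountPath: /etc/nginx/conf.d   # default.conf appears here as a file
#     volumes:
#     - name: nginx-config
#       configMap:
#         name: nginx-config

After kubectl apply of a ConfigMap like this, the session below verifies that the configured document root serves the file.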
Verify the website:
root@master1:~/yaml/20210925/case-yaml/case6# kubectl exec -it deploy/nginx-deployment -- bash
root@nginx-deployment-6b86dd48c8-hdw2x:/# cd /data/nginx/html/
root@nginx-deployment-6b86dd48c8-hdw2x:/data/nginx/html# echo "ceshi" > index.html

2. Characteristics of PV and PVC
2.1 Using PV
As a storage resource, a PV's definition covers capacity, access modes, storage type, reclaim policy, backend storage details, and other key settings. It is an abstraction over the underlying network storage: a single pool of network storage is split into multiple pieces that different workloads can consume.
# For NFS, first run nfs-server on the corresponding host and configure the export
root@node1:/data/svc# mkdir /data/k8sdata/test/redis-datadir -p
root@node1:/data/svc# apt install -y nfs-kernel-server
root@node1:~# vim /etc/exports
/data/svc 10.0.0.0/24(rw,no_root_squash,sync)
root@node1:~# systemctl restart nfs-kernel-server.service
root@node1:/data/svc# systemctl enable nfs-kernel-server.service
Synchronizing state of nfs-kernel-server.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable nfs-kernel-server

# Create the PV
root@master1:~/yaml# vim redis-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-data-pv
  namespace: test      # note: PVs are cluster-scoped, so this field is ignored
spec:
  capacity:            # size of this PV
    storage: 10Gi
  accessModes:         # access modes
  - ReadWriteOnce
  nfs:
    path: /data/k8sdata/test/redis-datadir
    server: 10.0.0.111

2.2 Using PVC
A PVC is a user's request for storage; its definition covers the requested capacity, access modes, PV selection criteria, storage class, and so on. It is a claim against PV resources: just as a Pod consumes node resources, a Pod writes data through its PVC to a PV, and the PV persists it to the backend storage.
root@master1:~/yaml# vim redis-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-pvc
  namespace: test
spec:
  volumeName: redis-data-pv    # name of the PV to bind
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

root@master1:~/yaml# kubectl apply -f redis-pvc.yaml

# After binding, the PV status changes from Available to Bound
root@master1:~/yaml# kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
redis-data-pv   10Gi       RWO            Retain           Bound    test/redis-data-pvc                          33m
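Note the RECLAIM POLICY column above: Retain is the default for statically created PVs, and means that deleting the PVC leaves the PV in Released state with the NFS data intact; it has to be cleaned up manually before reuse. To set the policy explicitly, add the field to the PV spec, as in this one-line sketch:

spec:
  persistentVolumeReclaimPolicy: Retain    # alternative: Delete (Recycle is deprecated)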
3. Kubernetes hands-on case studies

3.1 Case study: a ZooKeeper cluster
Build a ZooKeeper cluster using PVs and PVCs as the backend storage.
3.1.1 Building the ZooKeeper image
ZooKeeper release downloads: http://archive.apache.org/dist/zookeeper/
root@master1:~/Dockerfile/zookeeper# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz
root@master1:~/Dockerfile/zookeeper# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz.asc
root@master1:~/yaml# docker pull elevy/slim_java:8
root@master1:~# docker tag elevy/slim_java:8 10.0.0.104/baseimages/slim_java:8
root@master1:~# docker push 10.0.0.104/baseimages/slim_java:8

# Prepare the build context
root@master1:~/Dockerfile/zookeeper# ll
total 36852
drwxr-xr-x 2 root root     4096 Nov 19 14:31 ./
drwxr-xr-x 8 root root     4096 Nov 19 13:44 ../
-rwxr-xr-x 1 root root      123 Nov 19 14:24 build.sh*
-rw-r--r-- 1 root root     1562 Nov 19 14:23 Dockerfile
-rwxr-xr-x 1 root root      309 Nov 19 13:51 entrypoint.sh*
-rw-r--r-- 1 root root     5637 Nov 19 14:31 KEYS
-rw-r--r-- 1 root root     1852 Nov 19 13:53 log4j.properties
-rw-r--r-- 1 root root     5637 Nov 19 14:31 public-file.key
-rw-r--r-- 1 root root       91 Nov 19 13:51 repositories
-rwxr-xr-x 1 root root      102 Nov 19 13:52 zkReady.sh*
-rw-r--r-- 1 root root      507 Nov 19 13:52 zoo.cfg
-rw-r--r-- 1 root root 37676320 Nov 19 13:54 zookeeper-3.4.14.tar.gz
-rw-r--r-- 1 root root      836 Apr  1  2019 zookeeper-3.4.14.tar.gz.asc

root@master1:~/Dockerfile/zookeeper# cat entrypoint.sh
#!/bin/bash
echo ${MYID:-1} > /zookeeper/data/myid
if [ -n "$SERVERS" ]; then
  IFS=\, read -a servers <<<"$SERVERS"
  for i in "${!servers[@]}"; do
    printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg
  done
fi
cd /zookeeper
exec "$@"

root@master1:~/Dockerfile/zookeeper# cat zkReady.sh
#!/bin/bash
/zookeeper/bin/zkServer.sh status | egrep 'Mode: (standalone|leading|following|observing)'

root@master1:~/Dockerfile/zookeeper# cat repositories
http://mirrors.aliyun.com/alpine/v3.6/main
http://mirrors.aliyun.com/alpine/v3.6/community

root@master1:~/Dockerfile/zookeeper# cat log4j.properties
#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=5

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

# Fetching KEYS: signature verification fails without the public key
root@master1:~/Dockerfile/zookeeper# gpg --verify zookeeper-3.4.14.tar.gz.asc
gpg: assuming signed data in 'zookeeper-3.4.14.tar.gz'
gpg: Signature made Wed 06 Mar 2019 06:47:14 PM UTC
gpg:                using RSA key FFE35B7F15DFA1BA
gpg: Can't check signature: No public key

# Download the public key
root@master1:~/Dockerfile/zookeeper# gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys FFE35B7F15DFA1BA
gpg: key FFE35B7F15DFA1BA: public key "Andor Molnar <andor@apache.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1
root@master1:~/Dockerfile/zookeeper# gpg --verify zookeeper-3.4.14.tar.gz.asc
gpg: assuming signed data in 'zookeeper-3.4.14.tar.gz'
gpg: Signature made Wed 06 Mar 2019 06:47:14 PM UTC
gpg:                using RSA key FFE35B7F15DFA1BA
gpg: Good signature from "Andor Molnar <andor@apache.org>" [unknown]
gpg:                 aka "Andor Molnar <andor@cloudera.com>" [unknown]
gpg:                 aka "Andor Molnár <andor@apache.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: 3F7A 1D16 FA42 17B1 DC75  E1C9 FFE3 5B7F 15DF A1BA

# List all keys
root@master1:~/Dockerfile/zookeeper# gpg -k
/root/.gnupg/pubring.kbx
------------------------
pub   rsa4096 2018-06-19 [SC]
      3F7A1D16FA4217B1DC75E1C9FFE35B7F15DFA1BA
uid           [ unknown] Andor Molnar <andor@apache.org>
uid           [ unknown] Andor Molnar <andor@cloudera.com>
uid           [ unknown] Andor Molnár <andor@apache.org>
sub   rsa4096 2018-06-19 [E]

# Export the public key
root@master1:~/Dockerfile/zookeeper# gpg -a -o KEYS --export 3F7A1D16FA4217B1DC75E1C9FFE35B7F15DFA1BA

root@master1:~/Dockerfile/zookeeper# cat Dockerfile
FROM 10.0.0.104/baseimages/slim_java:8

ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates \
      gnupg \
      tar \
      wget && \
    apk add --no-cache \
      bash && \
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"
COPY zoo.cfg /zookeeper/conf/zoo.cfg
COPY log4j.properties /zookeeper/conf/log4j.properties
COPY zkReady.sh /zookeeper/bin/zkReady.sh
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "zkServer.sh", "start-foreground" ]
EXPOSE 2181 2888 3888 9010

root@master1:~/Dockerfile/zookeeper# cat build.sh
#!/bin/bash
imagename="10.0.0.104/baseimages/zookeeper:v1"
docker build -t "$imagename" .
sleep 1
docker push "$imagename"

# Build the image and push it to Harbor
root@master1:~/Dockerfile/zookeeper# sh build.sh

3.1.2 Testing the ZooKeeper image
root@master1:~/Dockerfile/zookeeper# docker run -it --rm 10.0.0.104/baseimages/zookeeper:v1
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /zookeeper/log (Is a directory)
        at java.io.FileOutputStream.open0(Native Method)
        at java.io.FileOutputStream.open(FileOutputStream.java:270)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
        at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
        at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
        at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
        at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
        at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
        at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
        at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:842)
        at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:768)
        at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:648)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:514)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:580)
        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
        at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
        at org.slf4j.impl.Log4jLoggerFactory.<init>(Log4jLoggerFactory.java:66)
        at org.slf4j.impl.StaticLoggerBinder.<init>(StaticLoggerBinder.java:72)
        at org.slf4j.impl.StaticLoggerBinder.<clinit>(StaticLoggerBinder.java:45)
        at org.slf4j.LoggerFactory.bind(LoggerFactory.java:150)
        at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:124)
        at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:412)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:357)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:383)
        at org.apache.zookeeper.server.quorum.QuorumPeerMain.<clinit>(QuorumPeerMain.java:67)
2021-11-19 14:45:57,568 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2021-11-19 14:45:57,576 [myid:] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2021-11-19 14:45:57,576 [myid:] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1
2021-11-19 14:45:57,580 [myid:] - WARN  [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running in standalone mode
2021-11-19 14:45:57,598 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2021-11-19 14:45:57,602 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /zookeeper/bin/../conf/zoo.cfg
2021-11-19 14:45:57,604 [myid:] - INFO  [main:ZooKeeperServerMain@98] - Starting server
2021-11-19 14:45:57,626 [myid:] - INFO  [main:Environment@100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2021-11-19 14:45:57,627 [myid:] - INFO  [main:Environment@100] - Server environment:host.name=060cc2930585
2021-11-19 14:45:57,628 [myid:] - INFO  [main:Environment@100] - Server environment:java.version=1.8.0_144
2021-11-19 14:45:57,628 [myid:] - INFO  [main:Environment@100] - Server environment:java.vendor=Oracle Corporation
2021-11-19 14:45:57,628 [myid:] - INFO  [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-8-oracle
2021-11-19 14:45:57,628 [myid:] - INFO  [main:Environment@100] - Server environment:java.class.path=/zookeeper/bin/../zookeeper-server/target/classes:/zookeeper/bin/../build/classes:/zookeeper/bin/../zookeeper-server/target/lib/*.jar:/zookeeper/bin/../build/lib/*.jar:/zookeeper/bin/../lib/slf4j-log4j12-1.7.25.jar:/zookeeper/bin/../lib/slf4j-api-1.7.25.jar:/zookeeper/bin/../lib/netty-3.10.6.Final.jar:/zookeeper/bin/../lib/log4j-1.2.17.jar:/zookeeper/bin/../lib/jline-0.9.94.jar:/zookeeper/bin/../lib/audience-annotations-0.5.0.jar:/zookeeper/bin/../zookeeper-3.4.14.jar:/zookeeper/bin/../zookeeper-server/src/main/resources/lib/*.jar:/zookeeper/bin/../conf:
2021-11-19 14:45:57,628 [myid:] - INFO  [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2021-11-19 14:45:57,629 [myid:] - INFO  [main:Environment@100] - Server environment:java.io.tmpdir=/tmp
2021-11-19 14:45:57,629 [myid:] - INFO  [main:Environment@100] - Server environment:java.compiler=<NA>
2021-11-19 14:45:57,634 [myid:] - INFO  [main:Environment@100] - Server environment:os.name=Linux
2021-11-19 14:45:57,634 [myid:] - INFO  [main:Environment@100] - Server environment:os.arch=amd64
2021-11-19 14:45:57,634 [myid:] - INFO  [main:Environment@100] - Server environment:os.version=5.4.0-90-generic
2021-11-19 14:45:57,634 [myid:] - INFO  [main:Environment@100] - Server environment:user.name=root
2021-11-19 14:45:57,635 [myid:] - INFO  [main:Environment@100] - Server environment:user.home=/root
2021-11-19 14:45:57,635 [myid:] - INFO  [main:Environment@100] - Server environment:user.dir=/zookeeper
2021-11-19 14:45:57,627 [myid:] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2021-11-19 14:45:57,648 [myid:] - INFO  [main:ZooKeeperServer@836] - tickTime set to 2000
2021-11-19 14:45:57,648 [myid:] - INFO  [main:ZooKeeperServer@845] - minSessionTimeout set to -1
2021-11-19 14:45:57,649 [myid:] - INFO  [main:ZooKeeperServer@854] - maxSessionTimeout set to -1
2021-11-19 14:45:57,668 [myid:] - INFO  [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2021-11-19 14:45:57,681 [myid:] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181

Success: the server comes up in standalone mode and binds to port 2181. The log4j setFile error only affects the ROLLINGFILE appender and is harmless here.
3.1.3 Creating the PVs and PVCs
Create the ZooKeeper data directories on the NFS server in advance:
root@haproxy1:~# mkdir /data/k8sdata/zookeeper-data-{1..3}

Create the PVs and PVCs:
root@master1:~/yaml/zookeeper# vim zk-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.109
    path: /data/k8sdata/zookeeper-data-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.109
    path: /data/k8sdata/zookeeper-data-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.109
    path: /data/k8sdata/zookeeper-data-3

root@master1:~/yaml/zookeeper# vim zk-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
spec:
  accessModes:
  - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 2Gi

root@master1:~/yaml/zookeeper# kubectl apply -f zk-pv.yaml
persistentvolume/zookeeper-datadir-pv-1 created
persistentvolume/zookeeper-datadir-pv-2 created
persistentvolume/zookeeper-datadir-pv-3 created
root@master1:~/yaml/zookeeper# kubectl apply -f zk-pvc.yaml
persistentvolumeclaim/zookeeper-datadir-pvc-1 created
persistentvolumeclaim/zookeeper-datadir-pvc-2 created
persistentvolumeclaim/zookeeper-datadir-pvc-3 created

root@master1:~/yaml/zookeeper# kubectl get pvc
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   2Gi        RWO                           90s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   2Gi        RWO                           89s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   2Gi        RWO                           89s

3.1.4 Creating the ZooKeeper cluster
root@master1:~/yaml/zookeeper# vim zk-deploy.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      containers:
      - name: server
        image: 10.0.0.104/baseimages/zookeeper:v1
        imagePullPolicy: Always
        env:
        - name: MYID
          value: "1"
        - name: SERVERS
          value: "zookeeper1,zookeeper2,zookeeper3"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: "/zookeeper/data"
          name: zookeeper-datadir-pvc-1
      volumes:
      - name: zookeeper-datadir-pvc-1
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      containers:
      - name: server
        image: 10.0.0.104/baseimages/zookeeper:v1
        imagePullPolicy: Always
        env:
        - name: MYID
          value: "2"
        - name: SERVERS
          value: "zookeeper1,zookeeper2,zookeeper3"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: "/zookeeper/data"
          name: zookeeper-datadir-pvc-2
      volumes:
      - name: zookeeper-datadir-pvc-2
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: zookeeper3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
      - name: server
        image: 10.0.0.104/baseimages/zookeeper:v1
        imagePullPolicy: Always
        env:
        - name: MYID
          value: "3"
        - name: SERVERS
          value: "zookeeper1,zookeeper2,zookeeper3"
        - name: JVMFLAGS
          value: "-Xmx2G"
        ports:
        - containerPort: 2181
        - containerPort: 2888
        - containerPort: 3888
        volumeMounts:
        - mountPath: "/zookeeper/data"
          name: zookeeper-datadir-pvc-3
      volumes:
      - name: zookeeper-datadir-pvc-3
        persistentVolumeClaim:
          claimName: zookeeper-datadir-pvc-3
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - name: client
    port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
spec:
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 42181
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
spec:
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 42182
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
spec:
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 42183
  - name: followers
    port: 2888
  - name: election
    port: 3888
  selector:
    app: zookeeper
    server-id: "3"

root@master1:~/yaml/zookeeper# kubectl apply -f zk-deploy.yaml
root@master1:~/yaml/zookeeper# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-8484c8b5f5-q6fzl   1/1     Running   0          117s
zookeeper2-794b6d6fcb-98wjb   1/1     Running   0          117s
zookeeper3-7b867f89bf-7bl29   1/1     Running   0          117s

Verify that the myid file in each shared-storage directory is unique:
On the storage server:
root@haproxy1:/data/k8sdata# cat zookeeper-data-1/
myid        version-2/
root@haproxy1:/data/k8sdata# cat zookeeper-data-1/myid
1
root@haproxy1:/data/k8sdata# cat zookeeper-data-2/myid
2
root@haproxy1:/data/k8sdata# cat zookeeper-data-3/myid
3

Cluster status can be checked from any ZooKeeper pod:
root@master1:~/yaml/zookeeper# kubectl exec -it zookeeper1-8484c8b5f5-q6fzl bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader
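To confirm the ensemble actually replicates data, a further check is to create a znode on one member and read it from another. A sketch (the pod names come from the kubectl get pod output above; the znode /test and its value are illustrative only):

root@master1:~/yaml/zookeeper# kubectl exec zookeeper1-8484c8b5f5-q6fzl -- /zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 create /test "hello"
root@master1:~/yaml/zookeeper# kubectl exec zookeeper2-794b6d6fcb-98wjb -- /zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 get /test
# if replication works, the second command prints "hello"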
3.2 Running nginx and tomcat from custom images

3.2.1 Building the nginx image
# Prepare the startup script
root@master1:~/scripts/Dockerfile/nginx# cat run.sh
#!/bin/bash
service nginx start
tail -f /etc/hosts    # keep the container in the foreground

# Prepare the configuration file
root@master1:~/scripts/Dockerfile/nginx# grep -Ev "#|^$" nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    upstream tomcat_server {
        server tomcat-svc.test.local:80;
    }
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        location /webapp {
            root   html;
            index  index.html index.htm;
        }
        location /myapp {
            proxy_pass http://10.0.0.101:8080;
            proxy_set_header Host $host;
            proxy_set_header X_forwarded_For $proxy_add_x_forwarded_for;
            proxy_set_header X_Real-IP $remote_addr;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

# Write the Dockerfile
root@master1:~/scripts/Dockerfile/nginx# cat Dockerfile
FROM nginx:latest

COPY nginx.conf /usr/local/nginx/conf/
ADD run.sh /scripts/run.sh

EXPOSE 80 443

CMD [ "/scripts/run.sh" ]

# Build and push to the Harbor registry
root@master1:~/scripts/Dockerfile/nginx# cat run-nginx.sh
#!/bin/bash
docker build -t 10.0.0.104/baseimages/nginx-base:v1.8.0 .
sleep 1
docker push 10.0.0.104/baseimages/nginx-base:v1.8.0
root@master1:~/scripts/Dockerfile/nginx# bash run-nginx.sh

3.2.2 Building the tomcat image
# Prepare the startup script
root@master1:~/scripts/Dockerfile/tomcat# cat run.sh
#!/bin/bash
/usr/local/tomcat/bin/catalina.sh start
tail -f /etc/hosts

# Write the Dockerfile
root@master1:~/scripts/Dockerfile/tomcat# cat Dockerfile
FROM 10.0.0.104/baseimages/jdk-base:v17

ADD apache-tomcat-9.0.54.tar.gz /usr/local/src
RUN ln -sv /usr/local/src/apache-tomcat-9.0.54 /usr/local/tomcat && mkdir /usr/local/jdk/jre/bin -p && ln -sv /usr/local/jdk/bin/java /usr/local/jdk/jre/bin/java
ADD app.tar.gz /usr/local/tomcat/webapp/test
ADD run.sh /usr/local/tomcat/bin/run.sh

EXPOSE 8080 443

CMD [ "/usr/local/tomcat/bin/run.sh" ]

# Build and push to the Harbor registry
root@master1:~/scripts/Dockerfile/tomcat# cat run-tomcat.sh
#!/bin/bash
docker build -t 10.0.0.104/baseimages/tomcat-base:v9.0.54 .
sleep 1
docker push 10.0.0.104/baseimages/tomcat-base:v9.0.54

root@master1:~/scripts/Dockerfile/tomcat# bash run-tomcat.sh

3.2.3 Running nginx on Kubernetes
# Run nginx
root@master1:~/yaml/nginx-tomcat# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: test
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # initContainers:
      # Init containers are exactly like regular containers, except:
      # - Init containers always run to completion.
      # - Each init container must complete successfully before the next one starts.
      containers:
      - name: nginx
        image: 10.0.0.104/baseimages/nginx-base:v1.8.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
          name: nginx
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: nginx-static
          mountPath: /usr/share/nginx/html/
          readOnly: false
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: nginx-static    # NFS mount; install nfs-server on the target host first
        nfs:
          server: 10.0.0.111
          path: /data/nginx/
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: test
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - name: nginx-svc
    protocol: TCP
    port: 85
    targetPort: 80
    # If you set the `spec.type` field to `NodePort` and you want a specific port number,
    # you can specify a value in the `spec.ports[*].nodePort` field.
    nodePort: 30010

root@master1:~/yaml/nginx-tomcat# kubectl apply -f nginx.yaml

3.2.4 Running tomcat on Kubernetes
root@master1:~/yaml/nginx-tomcat# cat tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: test
  labels:
    app: tomcat-deploy
spec:
  selector:
    matchLabels:
      app: tomcat-deploy
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat-deploy
    spec:
      containers:
      - name: tomcat-deploy
        image: 10.0.0.104/baseimages/tomcat-base:v9.0.54
        ports:
        - containerPort: 8080
          name: tomcat-svc
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
        - name: tomcat-html
          mountPath: /usr/local/tomcat/webapp/
      volumes:
      - name: localtime
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: tomcat-html
        nfs:
          server: 10.0.0.111
          path: /data/tomcat
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: test
spec:
  selector:
    app: tomcat-deploy
  type: NodePort
  ports:
  - name: tomcat-svc
    protocol: TCP
    port: 8081
    targetPort: 8080
    nodePort: 30004

root@master1:~/yaml/nginx-tomcat# kubectl apply -f tomcat.yaml

5. Persisting and sharing data with Kubernetes and Ceph
5.1 Using Ceph RBD with Kubernetes for storage volumes and dynamic provisioning
For Pods in Kubernetes to use RBD images from Ceph as storage devices, you must create the RBD image in Ceph and make sure the Kubernetes nodes can authenticate against the Ceph cluster.
5.1.1 Creating and initializing the RBD pool
# Create the storage pool
root@ceph-deploy:~# ceph osd pool create shijie-rbd-pool1 32 32
pool 'shijie-rbd-pool1' created
root@ceph-deploy:~# ceph osd pool ls
mypool
myrbd1
shijie-rbd-pool1

# Enable the rbd application on the pool
root@ceph-deploy:~# ceph osd pool application enable shijie-rbd-pool1 rbd
enabled application 'rbd' on pool 'shijie-rbd-pool1'

# Initialize the pool for RBD
root@ceph-deploy:~# rbd pool init -p shijie-rbd-pool1

5.1.2 Creating the RBD image
root@ceph-deploy:~# rbd create shijie-img-img1 --size 3G --pool shijie-rbd-pool1 --image-format 2 --image-feature layering
root@ceph-deploy:~# rbd ls --pool shijie-rbd-pool1
shijie-img-img1
root@ceph-deploy:~# rbd --image shijie-img-img1 --pool shijie-rbd-pool1 info
rbd image 'shijie-img-img1':
        size 3GiB in 768 objects
        order 22 (4MiB objects)
        block_name_prefix: rbd_data.375c6b8b4567
        format: 2
        features: layering
        flags:
        create_timestamp: Wed Oct 27 00:40:41 2021

5.1.3 Installing ceph-common
Install ceph-common on all master and node hosts:
# Add the Ceph repository
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
sudo apt-add-repository 'deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific/ focal main'
sudo apt update

root@master1:~# apt install ceph-common -y

5.1.4 Creating and authorizing a regular Ceph user
root@ceph-deploy:~# ceph auth get-or-create client.test-shijie mon 'allow r' osd 'allow * pool=shijie-rbd-pool1'
[client.test-shijie]
        key = AQAjCXlhxg6mHRAABELoPtsB0vrQipUGfRLO9g==

# Verify the user
root@ceph-deploy:~# ceph auth get client.test-shijie
exported keyring for client.test-shijie
[client.test-shijie]
        key = AQAjCXlhxg6mHRAABELoPtsB0vrQipUGfRLO9g==
        caps mon = "allow r"
        caps osd = "allow * pool=shijie-rbd-pool1"

# Export the user's credentials to a keyring file
root@ceph-deploy:~/ceph-cluster# ceph auth get client.test-shijie -o ceph.client.test-shijie.keyring

# Copy ceph.conf and the keyring to every master and node
root@ceph-deploy:~/ceph-cluster# scp /etc/ceph/ceph.conf ceph.client.test-shijie.keyring root@10.0.0.101:/etc/ceph/
root@ceph-deploy:~/ceph-cluster# scp /etc/ceph/ceph.conf ceph.client.test-shijie.keyring root@10.0.0.102:/etc/ceph/
root@ceph-deploy:~/ceph-cluster# scp /etc/ceph/ceph.conf ceph.client.test-shijie.keyring root@10.0.0.103:/etc/ceph/
root@ceph-deploy:~/ceph-cluster# scp /etc/ceph/ceph.conf ceph.client.test-shijie.keyring root@10.0.0.111:/etc/ceph/
root@ceph-deploy:~/ceph-cluster# scp /etc/ceph/ceph.conf ceph.client.test-shijie.keyring root@10.0.0.112:/etc/ceph/
root@ceph-deploy:~/ceph-cluster# scp /etc/ceph/ceph.conf ceph.client.test-shijie.keyring root@10.0.0.113:/etc/ceph/

# Verify the user from a node
root@node1:~# ceph --user test-shijie -s
  cluster:
    id:     7e77062f-814b-4782-ba3d-df00c48eafe6
    health: HEALTH_WARN
            application not enabled on 1 pool(s)
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3
    mgr: ceph-node1(active), standbys: ceph-node2
    osd: 9 osds: 9 up, 9 in
  data:
    pools:   3 pools, 128 pgs
    objects: 11 objects, 670KiB
    usage:   9.07GiB used, 440GiB / 449GiB avail
    pgs:     128 active+clean

# Verify access to the image
root@node1:~# rbd --id test-shijie ls --pool=shijie-rbd-pool1
shijie-img-img1

5.1.5 Configuring hostname resolution on the Kubernetes nodes
On every master and node:
root@master1:~# vim /etc/hosts
10.0.0.91 ceph-deploy.example.local ceph-deploy
10.0.0.92 ceph-node1.example.local ceph-node1
10.0.0.93 ceph-node2.example.local ceph-node2
10.0.0.94 ceph-node3.example.local ceph-node3

5.1.6 Mounting RBD via a keyring file
There are two ways to provide Pods with volumes backed by Ceph RBD: mount the RBD image using the node's keyring file, or store the keyring's key as a Kubernetes Secret and have the Pod mount the RBD image through that Secret.
root@master1:~/yaml/20211010/ceph-case# cat case2-nginx-keyring.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:    # rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
      - name: rbd-data1
        rbd:
          monitors:
          - '10.0.0.92:6789'
          - '10.0.0.93:6789'
          - '10.0.0.94:6789'
          pool: shijie-rbd-pool1
          image: shijie-img-img1
          fsType: ext4
          readOnly: false
          user: test-shijie
          keyring: /etc/ceph/ceph.client.test-shijie.keyring

root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case2-nginx-keyring.yaml

# Verify the mount
root@master1:~/yaml/20211010/ceph-case# kubectl exec -it deploy/nginx-deployment sh
# df -h|grep rbd
/dev/rbd0       2.9G  9.1M  2.9G   1% /data
# echo "123456" > /data/test.txt

5.1.7 Mounting RBD via a Secret
# Prepare the secret material
root@master1:~# cat /etc/ceph/ceph.client.test-shijie.keyring
[client.test-shijie]
        key = AQAjCXlhxg6mHRAABELoPtsB0vrQipUGfRLO9g==    # this is the value we need
        caps mon = "allow r"
        caps osd = "allow * pool=shijie-rbd-pool1"

# Base64-encode the key
root@master1:~# echo AQAjCXlhxg6mHRAABELoPtsB0vrQipUGfRLO9g== | base64

# Write the Secret manifest using the encoded key
root@master1:~/yaml/20211010/ceph-case# cat case3-secret-client-shijie.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-test-shijie
type: "kubernetes.io/rbd"
data:
  key: QVFBakNYbGh4ZzZtSFJBQUJFTG9QdHNCMHZyUWlwVUdmUkxPOWc9PQo=    # the base64 output from above

# Create the Secret
root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case3-secret-client-shijie.yaml

# Prepare the Deployment manifest
root@master1:~/yaml/20211010/ceph-case# cat case4-nginx-secret.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:    # rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
      - name: rbd-data1
        rbd:
          monitors:
          - '10.0.0.92:6789'
          - '10.0.0.93:6789'
          - '10.0.0.94:6789'
          pool: shijie-rbd-pool1
          image: shijie-img-img1
          fsType: ext4
          readOnly: false
          user: test-shijie
          secretRef:
            name: ceph-secret-test-shijie    # the Secret created above

# Verify
root@master1:~/yaml/20211010/ceph-case# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
net-test1                           1/1     Running   4          22d
net-test3                           1/1     Running   4          22d
nginx-deployment-6f874c88dc-mmlsf   1/1     Running   0          6m11s
# df -h|grep rbd
/dev/rbd0       2.9G  9.1M  2.9G   1% /data
# ls /data
lost+found  test.txt

5.1.8 Mounting RBD via a StorageClass
# Get the admin key
root@ceph-deploy:~/ceph-cluster# cat ceph.client.admin.keyring
[client.admin]
        key = AQDkindhXMU6HxAA2wOghbO8vNRjhF5Z2ZM4Yg==

# Base64-encode it
root@ceph-deploy:~/ceph-cluster# echo AQDkindhXMU6HxAA2wOghbO8vNRjhF5Z2ZM4Yg== | base64
QVFEa2luZGhYTVU2SHhBQTJ3T2doYk84dk5SamhGNVoyWk00WWc9PQo=

# Create the admin Secret
root@master1:~/yaml/20211010/ceph-case# cat case5-secret-admin.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFEa2luZGhYTVU2SHhBQTJ3T2doYk84dk5SamhGNVoyWk00WWc9PQo=    # the base64-encoded key

root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case5-secret-admin.yaml
secret/ceph-secret-admin created

# Verify
root@master1:~/yaml/20211010/ceph-case# kubectl get secret
NAME                  TYPE                                  DATA   AGE
ceph-secret-admin     kubernetes.io/rbd                     1      31s
default-token-bbngc   kubernetes.io/service-account-token   3      22d

# Create the regular-user Secret as well
root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case3-secret-client-shijie.yaml
secret/ceph-secret-test-shijie created
root@master1:~/yaml/20211010/ceph-case# kubectl get secret
NAME                      TYPE                                  DATA   AGE
ceph-secret-admin         kubernetes.io/rbd                     1      9m28s
ceph-secret-test-shijie   kubernetes.io/rbd                     1      21s
default-token-bbngc       kubernetes.io/service-account-token   3      22d

# Create the StorageClass
root@master1:~/yaml/20211010/ceph-case# cat case6-ceph-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class-shijie
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"    # make this the default StorageClass
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.0.0.92:6789,10.0.0.93:6789,10.0.0.94:6789
  adminId: admin
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default
  pool: shijie-rbd-pool1
  userId: test-shijie
  userSecretName: ceph-secret-test-shijie

root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case6-ceph-storage-class.yaml
storageclass.storage.k8s.io/ceph-storage-class-shijie created

# Verify
root@master1:~/yaml/20211010/ceph-case# kubectl get storageclass
NAME                                  PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-storage-class-shijie (default)   kubernetes.io/rbd   Delete          Immediate           false                  4s

# Create a PVC that uses the StorageClass
root@master1:~/yaml/20211010/ceph-case# cat case7-mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ceph-storage-class-shijie
  resources:
    requests:
      storage: '5Gi'

root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case7-mysql-pvc.yaml
persistentvolumeclaim/mysql-data-pvc created

# Verify
root@master1:~/yaml/20211010/ceph-case# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
mysql-data-pvc   Bound    pvc-7704d1ac-d43c-4391-b4b6-25da14cb7d1b   5Gi        RWO            ceph-storage-class-shijie   3m31s

# Start a MySQL service backed by the PVC
root@master1:~/yaml/20211010/ceph-case# cat case8-mysql-single.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: 10.0.0.104/baseimages/mysql:5.6.46
        name: mysql
        env:
        # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: test123456
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-data-pvc
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 43306
  selector:
    app: mysql

root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case8-mysql-single.yaml
deployment.apps/mysql created
service/mysql-service unchanged

# Verify
root@master1:~/yaml/20211010/ceph-case# kubectl exec -it deploy/mysql bash
root@mysql-55f8f7d588-jzm5m:/# df -h
Filesystem                         Size  Used Avail Use% Mounted on
overlay                             19G  7.7G   11G  44% /
tmpfs                               64M     0   64M   0% /dev
tmpfs                              980M     0  980M   0% /sys/fs/cgroup
/dev/mapper/ubuntu--vg-ubuntu--lv   19G  7.7G   11G  44% /etc/hosts
shm                                 64M     0   64M   0% /dev/shm
/dev/rbd0                          4.9G  131M  4.8G   3% /var/lib/mysql    # the RBD mount shows provisioning succeeded
tmpfs                              980M   12K  980M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                              980M     0  980M   0% /proc/acpi
tmpfs                              980M     0  980M   0% /proc/scsi
tmpfs                              980M     0  980M   0% /sys/firmware

5.1.9 Mounting CephFS on multiple nodes
# Make sure every node has the Ceph admin keyring
root@ceph-deploy:~/ceph-cluster# scp ceph.client.admin.keyring 10.0.0.111:/etc/ceph/
root@10.0.0.111's password:
ceph.client.admin.keyring                    100%   63    26.0KB/s   00:00
root@ceph-deploy:~/ceph-cluster# scp ceph.client.admin.keyring 10.0.0.112:/etc/ceph/
root@10.0.0.112's password:
ceph.client.admin.keyring                    100%   63    59.1KB/s   00:00
root@ceph-deploy:~/ceph-cluster# scp ceph.client.admin.keyring 10.0.0.113:/etc/ceph/
root@10.0.0.113's password:
ceph.client.admin.keyring

root@master1:~/yaml/20211010/ceph-case# cat case9-nginx-cephfs.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:    # rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: test-staticdata-cephfs
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: test-staticdata-cephfs
        cephfs:
          monitors:
          - '10.0.0.92:6789'
          - '10.0.0.93:6789'
          - '10.0.0.94:6789'
          path: /
          user: admin
          secretRef:
            name: ceph-secret-admin

root@master1:~/yaml/20211010/ceph-case# kubectl apply -f case9-nginx-cephfs.yaml
deployment.apps/nginx-deployment created

# Verify
root@master1:~/yaml/20211010/ceph-case# kubectl exec -it nginx-deployment-5bf64679b6-2v8tv bash
root@nginx-deployment-5bf64679b6-2v8tv:/# df -h
Filesystem                                      Size  Used Avail Use% Mounted on
10.0.0.92:6789,10.0.0.93:6789,10.0.0.94:6789:/  140G     0  140G   0% /usr/share/nginx/html
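Because all three replicas mount the same CephFS path, a file written through one pod should be visible from any other. A minimal sketch (the first pod name is from the verification above; replace <another-replica> with another pod name from kubectl get pod):

root@master1:~/yaml/20211010/ceph-case# kubectl exec nginx-deployment-5bf64679b6-2v8tv -- sh -c 'echo cephfs-shared > /usr/share/nginx/html/index.html'
root@master1:~/yaml/20211010/ceph-case# kubectl exec <another-replica> -- cat /usr/share/nginx/html/index.html
# expected output: cephfs-shared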
6. Pod status and probes

6.1 Common Pod states
- Pending: the Pod has been created but cannot be scheduled; no node currently satisfies its scheduling constraints. This phase also covers the time spent setting up the Pod's network and pulling images.
- Running: all containers in the Pod have been created, and at least one container is running, starting, or restarting.
- Succeeded: all containers in the Pod terminated successfully, and none will be restarted.
- Failed: all containers in the Pod have terminated, and at least one terminated in failure.
- Unknown: the Pod's state cannot be determined. The apiserver learns a Pod's state from the kubelet on the Pod's node; if that kubelet fails, the apiserver cannot reach it and reports Unknown. (A quick way to inspect these states is sketched below.)
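A minimal sketch for inspecting these states from the command line (the pod name is a placeholder):

kubectl get pod -o wide                                      # STATUS column shows the phase
kubectl get pod <pod-name> -o jsonpath='{.status.phase}'     # print only the phase
kubectl describe pod <pod-name>                              # the Events section explains Pending/Unknown causes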
A probe is a periodic diagnostic performed by the kubelet on a container. To run the diagnostic, the kubelet calls a handler implemented by the container. There are three types of handlers:
- ExecAction: runs a specified command inside the container. The diagnostic succeeds if the command exits with status code 0.
- TCPSocketAction: performs a TCP check against the container's IP address on a specified port. The diagnostic succeeds if the port is open.
- HTTPGetAction: performs an HTTP GET request against the container's IP address on a specified port and path. The diagnostic succeeds if the response status code is at least 200 and below 400.
6.2 Each probe yields one of three results:
- Success: the container passed the diagnostic.
- Failure: the container failed the diagnostic.
- Unknown: the diagnostic itself failed, so no action is taken.
6.3 Pod probes
6.3.1 The two probe types: livenessProbe and readinessProbe
- livenessProbe: indicates whether the container is running. If the liveness probe fails, the kubelet kills the container, which then becomes subject to its restart policy. If no liveness probe is configured, the default state is Success. livenessProbe controls whether the Pod is restarted.
- readinessProbe: indicates whether the container is ready to serve requests. If the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. The readiness state before the initial delay defaults to Failure. If no readiness probe is configured, the default state is Success. readinessProbe controls whether the Pod is added to the Service (a quick way to observe this is sketched below).
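A small sketch for observing the readiness effect, assuming the ng-deploy-80 Service used in the examples below:

kubectl get endpoints ng-deploy-80        # the Pod IP is listed only while the readiness probe passes
kubectl get pod -l app=ng-deploy-80 -w    # the READY column flips between 0/1 and 1/1 as the probe fails and recovers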
6.3.2 HTTP probes
On the HTTP OPTIONS method: it is rarely used and reports which methods the target URL supports. On success, the response carries an "Allow" header listing the supported methods, such as "GET, POST". This makes it possible to check whether a service supports a given method, and hence whether it is alive.
An HTTP probe supports additional fields on httpGet:
- host: hostname to connect to; defaults to the Pod's IP. You can set the "Host" HTTP header instead.
- scheme: scheme used to connect to the host (HTTP or HTTPS). Defaults to HTTP.
- path: path to access on the HTTP server.
- httpHeaders: custom headers to set in the request. HTTP allows repeated headers.
- port: port number or name to access on the container. A number must be in the range 1 to 65535.
Example:
root@master1:~/yaml/20211010/n56-yaml-20211010# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:    # rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5
        ports:
        - containerPort: 80
        #readinessProbe:
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /index.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

root@master1:~/yaml/20211010/n56-yaml-20211010# kubectl apply -f nginx.yaml
deployment.apps/nginx-deployment created
service/ng-deploy-80 created

6.3.3 TCP probes
If it is enough to check that a TCP connection can be established, specify a TCP probe: the Pod is marked healthy if the connection succeeds. TCP probes are useful for gRPC or FTP servers, where an HTTP probe does not fit.
root@master1:~/yaml/20211010/n56-yaml-20211010# cat tcp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:    # rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5
        ports:
        - containerPort: 80
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

root@master1:~/yaml/20211010/n56-yaml-20211010# kubectl apply -f tcp.yaml
deployment.apps/nginx-deployment configured
service/ng-deploy-80 unchanged

6.3.4 Exec probes (ExecAction)
An exec probe runs a specified command inside the container to perform a custom health check on the Pod.
root@master1:~/yaml/20211010/n56-yaml-20211010# cat redis.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels:    # rs or deployment
      app: redis-deploy-6379
    #matchExpressions:
    #  - {key: app, operator: In, values: [redis-deploy-6379,ng-rs-81]}
  template:
    metadata:
      labels:
        app: redis-deploy-6379
    spec:
      containers:
      - name: redis-deploy-6379
        image: redis
        ports:
        - containerPort: 6379
        readinessProbe:
          exec:
            command:
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          exec:
            command:
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: redis-deploy-6379
spec:
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 40016
    protocol: TCP
  type: NodePort
  selector:
    app: redis-deploy-6379

root@master1:~/yaml/20211010/n56-yaml-20211010# kubectl apply -f redis.yaml
deployment.apps/redis-deployment created
service/redis-deploy-6379 created