Kubernetes安装EFK教程(非存储持久化方式部署)

Published: 2024/1/3

1. Introduction

Here, EFK refers to Elasticsearch, Fluentd, and Kibana.

ElasticSearch

Elasticsearch is an open-source search and analytics engine built on Apache Lucene™. It is written in Java and uses Lucene internally to implement all indexing and search functionality. Its goal is to hide Lucene's complexity behind a simple RESTful API, making full-text search easy. Beyond Lucene and full-text search, Elasticsearch also provides:

Distributed, real-time document storage, with every field indexed and searchable;
A distributed, real-time analytics search engine;
Horizontal scaling to hundreds of servers, handling petabytes of structured or unstructured data.

An Elasticsearch cluster contains multiple indices (Index); each index can contain multiple types (Type), each type can store multiple documents (Document), and each document has multiple fields. An index is analogous to a database in a traditional relational database: a place to store related documents. Elasticsearch exposes a standard RESTful API with JSON, and clients are also built and maintained for many other languages, such as Java, Python, .NET, and PHP.

Fluentd

Fluentd is an open-source data collector that unifies data collection and consumption, making data easier to use and understand. Fluentd structures data as JSON so that log data can be handled uniformly: collecting, filtering, buffering, and outputting. Fluentd has a plugin-based architecture, with input, output, filter, parser, formatter, buffer, and storage plugins, so it can be extended and adapted through plugins.

Kibana

Kibana is an open-source analytics and visualization platform designed to work with Elasticsearch. With Kibana you can search, view, and interact with the data stored in Elasticsearch, and analyze and visualize it using a variety of charts, tables, and maps.

2. Download the EFK YAML files

The kubernetes GitHub repository:

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch

Note:
   this repository only provides a non-persistent deployment; if you need persistent storage, modify the YAML files accordingly.

Download commands:

mkdir /root/EFK
cd /root/EFK

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-service.yaml

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-deployment.yaml

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch/kibana-service.yaml
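The six downloads above can also be scripted in one loop. This sketch only prints the wget commands (remove the echo to actually download); the URL base is the same one used above:

```shell
# Base URL and file list taken from the individual wget commands above.
BASE=https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/fluentd-elasticsearch
FILES="es-service.yaml es-statefulset.yaml fluentd-es-configmap.yaml fluentd-es-ds.yaml kibana-deployment.yaml kibana-service.yaml"
for f in $FILES; do
  # dry run: print the command; drop the echo to really download
  echo "wget $BASE/$f"
done
```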

Alternatively, the easzlab manifests also work:

https://github.com/easzlab/kubeasz/tree/master/manifests/efk

Note:
   this repository provides both non-persistent and persistent deployment variants.

Download commands:

wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/es-without-pv/es-statefulset.yaml
Note: es-static-pv and es-dynamic-pv are the static-PV and dynamic-PV persistent-storage variants, for reference if needed; the es-without-pv folder holds the non-persistent variant.

wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/es-service.yaml

wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/fluentd-es-configmap.yaml

wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/fluentd-es-ds.yaml

wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/kibana-deployment.yaml

wget https://raw.githubusercontent.com/easzlab/kubeasz/master/manifests/efk/kibana-service.yaml

3. Download the images EFK needs

Images and registry addresses referenced by the original YAML files:

elasticsearch:v7.4.2        quay.io/fluentd_elasticsearch/elasticsearch:v7.4.2
fluentd:v2.8.0              quay.io/fluentd_elasticsearch/fluentd:v2.8.0
kibana-oss:7.4.2            docker.elastic.co/kibana/kibana-oss:7.4.2

Note: v7.4.2 failed in repeated tests (elasticsearch kept restarting with errors), so the versions were changed to 6.6.1:
elasticsearch:v6.6.1          quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
fluentd-elasticsearch:v2.4.0  quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0
kibana-oss:6.6.1              docker.elastic.co/kibana/kibana-oss:6.6.1

Since the cluster has no internet access, matching images can be found on Alibaba Cloud's registry:

elasticsearch:v6.6.1            registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1
fluentd-elasticsearch:v2.4.0    registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0
kibana-oss:6.6.1                registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1

Pull the images with docker pull:

docker pull registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1
docker pull registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0
docker pull registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1

Retag the images so they match what the YAML files reference:

docker tag registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1 quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1

docker tag registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0  quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0

docker tag registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1 docker.elastic.co/kibana/kibana-oss:6.6.1
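As a sanity check, the source-to-target mapping behind the three tag commands above can be kept in one table and printed as a dry run before touching docker. This is a sketch, not part of the original steps:

```shell
# One "source target" pair per line, taken from the tag commands above.
PAIRS="registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1 quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0 quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0
registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1 docker.elastic.co/kibana/kibana-oss:6.6.1"
# Print each retag command; pipe into sh to actually run them.
echo "$PAIRS" | while read -r src dst; do
  echo "docker tag $src $dst"
done
```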

Remove the pre-retag images:

docker rmi registry.cn-hangzhou.aliyuncs.com/yfhub/elasticsearch:6.6.1

docker rmi registry.cn-hangzhou.aliyuncs.com/yfhub/fluentd-elasticsearch:v2.4.0

docker rmi registry.cn-hangzhou.aliyuncs.com/yfhub/kibana-oss:6.6.1

Save the images as tar archives so they can be distributed to the other Node machines and imported:

docker save -o elasticsearch-v6.6.1           quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1

docker save -o fluentd-elasticsearch-v2.4.0 quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0

docker save -o kibana-oss-6.6.1               docker.elastic.co/kibana/kibana-oss:6.6.1

Copy the archives to the other nodes:

scp elasticsearch-v6.6.1 fluentd-elasticsearch-v2.4.0 kibana-oss-6.6.1 k8s-node02:/root/

Import the images on node02:

docker load -i elasticsearch-v6.6.1 && docker load -i fluentd-elasticsearch-v2.4.0 && docker load -i kibana-oss-6.6.1
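With more images or more worker nodes, the save/copy/load steps above can be driven from one loop. This dry-run sketch only prints the commands, and the node names are examples; adjust them to your cluster:

```shell
# Images as tagged above; tarball names are derived from the image references.
IMAGES="quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1 quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0 docker.elastic.co/kibana/kibana-oss:6.6.1"
NODES="k8s-node01 k8s-node02"   # example node names
for img in $IMAGES; do
  tarball=$(echo "$img" | tr '/:' '__').tar   # make a filesystem-safe name
  echo "docker save -o $tarball $img"
  for node in $NODES; do
    echo "scp $tarball $node:/root/ && ssh $node docker load -i /root/$tarball"
  done
done
```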

4. Modify the official kubernetes EFK YAML files

The es-service.yaml file is as follows (note: lines commented out with # below may be active in the upstream file):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  type: NodePort          # exposed via NodePort so elasticsearch-head can connect to Elasticsearch for inspection
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch-logging

The es-statefulset.yaml file is as follows (same note: commented lines below may be active upstream):

# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"            # newly added line
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"            # newly added line
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"          # newly added line
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v6.6.1
    kubernetes.io/cluster-service: "true"            # newly added line
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v6.6.1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v6.6.1
        kubernetes.io/cluster-service: "true"        # newly added line
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: quay.io/fluentd_elasticsearch/elasticsearch:v6.6.1
        name: elasticsearch-logging
        imagePullPolicy: IfNotPresent       # default is Always; changed to IfNotPresent
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
         #   memory: 3Gi
          requests:
            cpu: 100m
         #   memory: 3Gi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
       # livenessProbe:
       #   tcpSocket:
       #     port: transport
       #   initialDelaySeconds: 5
       #   timeoutSeconds: 10
       # readinessProbe:
       #   tcpSocket:
       #     port: transport
       #   initialDelaySeconds: 5
       #   timeoutSeconds: 10
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
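The privileged init container above exists because Elasticsearch requires vm.max_map_count to be at least 262144. If you prefer to set this on the nodes themselves (and then drop the init container, as the upstream comment suggests), a quick check on each node looks like this:

```shell
# Check whether the node already satisfies Elasticsearch's mmap requirement.
REQUIRED=262144
current=$(sysctl -n vm.max_map_count 2>/dev/null || echo 0)
current=${current:-0}   # fall back to 0 if sysctl is unavailable
if [ "$current" -ge "$REQUIRED" ]; then
  echo "vm.max_map_count=$current OK"
else
  echo "vm.max_map_count=$current too low; run: sysctl -w vm.max_map_count=$REQUIRED"
fi
```

To persist the setting across reboots, add `vm.max_map_count=262144` to /etc/sysctl.conf on each node.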


The fluentd-es-ds.yaml file is as follows; fluentd-es-configmap.yaml is left unchanged (same note: commented lines below may be active upstream):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v2.4.0
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    version: v2.4.0
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.4.0
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v2.4.0
      # This annotation ensures that fluentd does not get evicted if the node
      # supports critical pod annotation based priority scheme.
      # Note that this does not guarantee admission on the nodes (#40573).
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: quay.io/fluentd_elasticsearch/fluentd_elasticsearch:v2.4.0  # remember to update this image reference
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
       # ports:
       # - containerPort: 24231
        #  name: prometheus
        #  protocol: TCP
        #livenessProbe:
         # tcpSocket:
         #   port: prometheus
         # initialDelaySeconds: 5
         # timeoutSeconds: 10
        #readinessProbe:
         # tcpSocket:
         #   port: prometheus
         # initialDelaySeconds: 5
         # timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-es-config-v0.2.0

The kibana-deployment.yaml file is as follows (same note: commented lines below may be active upstream):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana-oss:6.6.1   # image reference
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
          #- name: ELASTICSEARCH_HOSTS
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch-logging:9200
          #- name: SERVER_NAME
          #  value: kibana-logging
          - name: SERVER_BASEPATH
            value: ""    # kibana is accessed via NodePort, so change the value to an empty string
            #value: /api/v1/namespaces/kube-system/services/kibana-logging/proxy
         # - name: SERVER_REWRITEBASEPATH
         #   value: "false"
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
        #livenessProbe:            # the livenessProbe and readinessProbe checks can remain commented out; they are not needed here
         # httpGet:
         #   path: /api/status
         #   port: ui
         # initialDelaySeconds: 5
         # timeoutSeconds: 10
        #readinessProbe:
          #httpGet:
          #  path: /api/status
          #  port: ui
          #initialDelaySeconds: 5
          #timeoutSeconds: 10


The kibana-service.yaml file is as follows (same note: commented lines below may be active upstream):

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort        # added so kibana can be accessed directly via node IP and port
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging

5. Apply all the EFK YAML files; everything EFK needs is saved under a single directory, /root/EFK

kubectl apply -f /root/EFK/
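After applying, it is worth confirming that every component comes up. This snippet generates the check commands from the k8s-app labels used in the manifests above, so none is forgotten; pipe it into sh to run them against your cluster:

```shell
# Labels come from the manifests: elasticsearch-logging (StatefulSet),
# fluentd-es (DaemonSet), kibana-logging (Deployment), all in kube-system.
for app in elasticsearch-logging fluentd-es kibana-logging; do
  echo "kubectl get pods -n kube-system -l k8s-app=$app"
done
```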

6. Check the ports exposed by the services (kubectl get svc -n kube-system)
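In the service listing, the NodePort is the second number in the PORT(S) column (e.g. 5601:30561/TCP). To extract it in a script, a sed one-liner works; the sample line below is illustrative output, not from a real cluster:

```shell
# Sample `kubectl get svc` output line (values are examples).
LINE="kibana-logging   NodePort   10.68.121.97   <none>   5601:30561/TCP   5m"
# Capture the digits between the last ':' and '/TCP'.
PORT=$(echo "$LINE" | sed -n 's/.*:\([0-9]*\)\/TCP.*/\1/p')
echo "$PORT"   # → 30561
```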

7. Verify Elasticsearch with the elasticsearch-head browser extension

You can install the elasticsearch-head extension in Chrome and use it to connect to Elasticsearch, checking whether it is reachable and whether any errors are reported.
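If you would rather not install a browser plugin, the same checks can be made with curl against the Elasticsearch NodePort. The IP and port below are placeholders for your own values; this sketch just prints the commands to run:

```shell
NODE_IP=192.168.1.11   # any node's IP (example)
ES_PORT=31942          # NodePort of elasticsearch-logging (example)
# Cluster health and index listing via the REST API.
echo "curl -s http://$NODE_IP:$ES_PORT/_cluster/health?pretty"
echo "curl -s http://$NODE_IP:$ES_PORT/_cat/indices?v"
```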

8. Access kibana through the NodePort exposed by its service
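The kibana URL is simply any node's IP plus the NodePort of the kibana-logging service from step 6; the values below are examples:

```shell
NODE_IP=192.168.1.11   # any node's IP (example)
NODE_PORT=30561        # NodePort of kibana-logging (example)
echo "http://$NODE_IP:$NODE_PORT"   # open this URL in a browser
```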
