Installing Elastic Stack on a Kubernetes Cluster with ECK to Build a Log/Metrics Visualization Environment

Introduction

While building my Kubernetes cluster, I set out to build a log/metrics visualization environment with Elastic Stack.
Elastic Cloud on Kubernetes (ECK), the Kubernetes Operator/CRDs for Elastic Stack, reached version 1.0 on January 16, 2020, so I based the build on ECK.
ECK does not cover Beats, however, so I wrote the corresponding manifests myself.

kibana_metrics_pods_logs2.gif

In addition to the walkthrough, the results as displayed in Kibana are attached at the end as GIFs and screenshots.

Environment

The ECK deployment and its runtime environment are as follows.

Versions

Elastic Stack version: 7.6.0
ECK (elastic-operator) version: 1.0.1
Kubernetes version: 1.16.3
kube-state-metrics version: 1.9.4

Overall Architecture Diagram

To organize the overall resources, I drew up an architecture diagram (some resources, such as ServiceAccount, Role, and ConfigMap, are omitted).

elasticstack-k8s.png

The manifests to be created are organized as follows.

elasticstack-k8s_manifest.png

The manifests shown in red are the ones created this time.
Details of PV creation (iscsi-pv.yaml) and the iSCSI setup are covered in the earlier posts on QNAP PV creation and the local ECK verification, so they are omitted here.

构建步骤

The build proceeds in the following steps.

1. Create the namespace
2. Install ECK
3. Build Elasticsearch
4. Build Kibana
5. Build filebeat
6. Build metricbeat
7. Build auditbeat

    • Steps 1-2: create the namespace for this Elastic Stack deployment and build the ECK Operator

    • Steps 3-4: build the Elasticsearch / Kibana environment

    • Steps 5-7: build the three kinds of Beats used this time

1. Creating the Namespace

To keep these resources separate from default and kube-system, create a namespace named elastic-monitoring.

kind: Namespace
apiVersion: v1
metadata:
  name: elastic-monitoring
  labels:
    name: elastic-monitoring

Create the namespace with the manifest above.

$ kubectl apply -f elastic-namespace.yaml 
namespace/elastic-monitoring created

2. Installing ECK

Install ECK. This creates the CRDs, the Operator, and the other resources for Elasticsearch / Kibana / ApmServer.
(Details are described in the local ECK verification post, so refer there. Only the version differs, as the latest available version was applied this time.)

$ curl -OL https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 93520  100 93520    0     0  94865      0 --:--:-- --:--:-- --:--:-- 94847

$ kubectl apply -f all-in-one.yaml 
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
namespace/elastic-system created
statefulset.apps/elastic-operator created
serviceaccount/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
service/elastic-webhook-server created
secret/elastic-webhook-server-cert created

$ kubectl get po -n elastic-system
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   1          3m13s
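As a sanity check, the registered CRDs and the Operator logs can also be inspected. A minimal sketch (kubectl logs picks one pod of the StatefulSet):

$ kubectl get crd | grep elastic.co
$ kubectl logs -n elastic-system statefulset/elastic-operator --tail=20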

3. Building Elasticsearch

Create the Elasticsearch cluster.
(Details in the local ECK verification post; only the namespace and name were changed to the ones used here.)
(A standard StorageClass with 50Gi of storage is assumed; see the local ECK verification post for details.)

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: monitoring-elasticsearch
  namespace: elastic-monitoring
spec:
  version: 7.6.0
  nodeSets:
  - name: master-data
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: standard

Apply the YAML above.

$ kubectl apply -f elasticsearch.yaml 
elasticsearch.elasticsearch.k8s.elastic.co/monitoring-elasticsearch created

$ kubectl get po -n elastic-monitoring
NAME                                        READY   STATUS    RESTARTS   AGE
monitoring-elasticsearch-es-master-data-0   1/1     Running   0          53s
monitoring-elasticsearch-es-master-data-1   1/1     Running   0          53s
monitoring-elasticsearch-es-master-data-2   1/1     Running   0          52s

$ kubectl get es -n elastic-monitoring
NAME                       HEALTH   NODES   VERSION   PHASE   AGE
monitoring-elasticsearch   green    3       7.6.0     Ready   79s
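Optionally, the cluster can also be queried directly through the Service that ECK creates ([name]-es-http). A minimal sketch, reusing the elastic user's password (run the port-forward in a separate terminal, or background it as shown):

$ PASSWORD=$(kubectl get secret monitoring-elasticsearch-es-elastic-user -n elastic-monitoring -o=jsonpath='{.data.elastic}' | base64 --decode)
$ kubectl port-forward service/monitoring-elasticsearch-es-http 9200 -n elastic-monitoring &
$ curl -u "elastic:${PASSWORD}" -k "https://localhost:9200/_cluster/health?pretty"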

4. Building Kibana

Build Kibana.

(Details are described in the local ECK verification post; only the namespace and name were changed to the ones used here.)

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: monitoring-kibana
  namespace: elastic-monitoring
spec:
  version: 7.6.0
  count: 1
  elasticsearchRef:
    name: monitoring-elasticsearch

Apply the YAML above.

$ kubectl apply -f kibana.yaml 
kibana.kibana.k8s.elastic.co/monitoring-kibana created

$ kubectl get kibana -n elastic-monitoring
NAME                HEALTH   NODES   VERSION   AGE
monitoring-kibana   green    1       7.6.0     117s

Retrieve the login password.

$ kubectl get secret monitoring-elasticsearch-es-elastic-user -n elastic-monitoring -o=jsonpath='{.data.elastic}' | base64 --decode; echo
s2gqmsd5vxbknqlqpvsjmztg

Set up port forwarding.

$ kubectl port-forward service/monitoring-kibana-kb-http 5601 -n elastic-monitoring
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601

You can now log in from a browser at https://127.0.0.1:5601 with the username elastic and the password retrieved above (here: s2gqmsd5vxbknqlqpvsjmztg). (Login examples and screenshots are in the local ECK verification post.)

5. Building filebeat

Install filebeat for log collection.
The build is based on the official Elastic YAML for Kubernetes, modified as described below.

Download the base YAML.

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/filebeat-kubernetes.yaml

The following changes are made.

5.1 Changing the namespace

By default kube-system is specified, so change it to elastic-monitoring.

$ sed -e 's/namespace: kube-system/namespace: elastic-monitoring/g' filebeat-kubernetes.yaml > filebeat.yaml 

5.2 Specifying the Elasticsearch/Kibana hosts, adding authentication, and referencing the Secrets

ECK has created Services and Secrets for Elasticsearch and Kibana, so point filebeat at them.

    • elasticsearch

Service: monitoring-elasticsearch-es-http (created as [name]-es-http)

Secret: monitoring-elasticsearch-es-http-certs-public (created as [name]-es-http-certs-public)

    • kibana

Service: monitoring-kibana-kb-http (created as [name]-kb-http)

Secret: monitoring-kibana-kb-http-certs-public (created as [name]-kb-http-certs-public)
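The generated names can be confirmed directly before editing the manifest. A minimal check:

$ kubectl get svc -n elastic-monitoring | grep -E 'es-http|kb-http'
$ kubectl get secret -n elastic-monitoring | grep certs-public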

DaemonSet

        env:
        - name: ELASTICSEARCH_HOST
          value: monitoring-elasticsearch-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              key: elastic
              name: monitoring-elasticsearch-es-elastic-user
        - name: KIBANA_HOST
          value: monitoring-kibana-kb-http
~~snip~~
        volumeMounts:
~~snip~~
        - name: es-certs
          mountPath: /mnt/elastic/tls.crt
          readOnly: true
          subPath: tls.crt
        - name: kb-certs
          mountPath: /mnt/kibana/tls.crt
          readOnly: true
          subPath: tls.crt
~~snip~~
      volumes:
~~snip~~
      - name: es-certs
        secret:
          secretName: monitoring-elasticsearch-es-http-certs-public
      - name: kb-certs
        secret:
          secretName: monitoring-kibana-kb-http-certs-public

ConfigMap

data:
  filebeat.yml: |-
~~snip~~
    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
      - /mnt/elastic/tls.crt

    setup.dashboards.enabled: true

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

5.3 Adding tolerations to also collect from the master nodes

We want data from the master nodes too, so set tolerations.

DaemonSet

    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
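To see which nodes carry the taint this toleration matches, the node taints can be listed. A minimal sketch:

$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints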

5.4 Autodiscover

An autodiscover feature is available for Kubernetes, so enable it.

ConfigMap

Comment out the default filebeat.inputs section, and uncomment the filebeat.autodiscover section instead.

  filebeat.yml: |-
    # filebeat.inputs:
    # - type: container
    #   paths:
    #     - /var/log/containers/*.log
    #   processors:
    #     - add_kubernetes_metadata:
    #         host: ${NODE_NAME}
    #         matchers:
    #         - logs_path:
    #             logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
      providers:
       - type: kubernetes
         host: ${NODE_NAME}
         hints.enabled: true
         hints.default_config:
           type: container
           paths:
             - /var/log/containers/*${data.kubernetes.container.id}.log
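With hints.enabled, individual Pods can influence how their logs are collected via annotations. A hypothetical example (the Pod name my-nginx and the nginx module are placeholders, not part of this build):

$ kubectl annotate pod my-nginx -n default co.elastic.logs/module=nginx co.elastic.logs/fileset.stdout=access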

5.5 Adding modules

Add the syslog and auth configurations of the system module.

ConfigMap

  filebeat.yml: |-
~~snip~~
    filebeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    filebeat.modules:
      - module: system
        syslog:
          enabled: true
          var.paths: ["/var/log/messages"]
          var.convert_timezone: true
        auth:
          enabled: true
          var.paths: ["/var/log/secure"]
          var.convert_timezone: true

5.6 Applying the time zone

Left as-is, timestamps end up in UTC, so mount the server's local time so that JST is reflected (though depending on the Kibana view, times sometimes still display in UTC... room for improvement).

DaemonSet

    spec:
~~snip~~
        volumeMounts:
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
~~snip~~
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
          type: File

5.7 Applying the modified YAML

The YAML with all of the preceding changes applied is shown below:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elastic-monitoring
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    # filebeat.inputs:
    # - type: container
    #   paths:
    #     - /var/log/containers/*.log
    #   processors:
    #     - add_kubernetes_metadata:
    #         host: ${NODE_NAME}
    #         matchers:
    #         - logs_path:
    #             logs_path: "/var/log/containers/"

    # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
    filebeat.autodiscover:
     providers:
       - type: kubernetes
         node: ${NODE_NAME}
         hints.enabled: true
         hints.default_config:
           type: container
           paths:
             - /var/log/containers/*${data.kubernetes.container.id}.log

    processors:
      - add_cloud_metadata:
      - add_host_metadata:
      - add_locale: ~

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
      - /mnt/elastic/tls.crt

    setup.dashboards.enabled: true

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt

    filebeat.config:
      modules:
        path: ${path.config}/modules.d/*.yml
        reload.enabled: false

    filebeat.modules:
      - module: system
        syslog:
          enabled: true
          var.paths: ["/var/log/messages"]
          var.convert_timezone: true
        auth:
          enabled: true
          var.paths: ["/var/log/secure"]
          var.convert_timezone: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elastic-monitoring
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.6.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: monitoring-elasticsearch-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              key: elastic
              name: monitoring-elasticsearch-es-elastic-user
        - name: KIBANA_HOST
          value: monitoring-kibana-kb-http
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
          # If using Red Hat OpenShift uncomment this:
          #privileged: true
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: es-certs
          mountPath: /mnt/elastic/tls.crt
          readOnly: true
          subPath: tls.crt
        - name: kb-certs
          mountPath: /mnt/kibana/tls.crt
          readOnly: true
          subPath: tls.crt
        - name: localtime
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
      - name: es-certs
        secret:
          secretName: monitoring-elasticsearch-es-http-certs-public
      - name: kb-certs
        secret:
          secretName: monitoring-kibana-kb-http-certs-public
      - name: localtime
        hostPath:
          path: /etc/localtime
          type: File
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: elastic-monitoring
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elastic-monitoring
  labels:
    k8s-app: filebeat
---

Apply the YAML above.

$ kubectl apply -f filebeat.yaml 
configmap/filebeat-config created
daemonset.apps/filebeat created
clusterrolebinding.rbac.authorization.k8s.io/filebeat created
clusterrole.rbac.authorization.k8s.io/filebeat created
serviceaccount/filebeat created

$ kubectl get po -n elastic-monitoring | grep filebeat
filebeat-4gqlc                              1/1     Running   3          83s
filebeat-h2zh2                              1/1     Running   3          83s
filebeat-lmb4f                              1/1     Running   3          83s
filebeat-ngfrx                              1/1     Running   3          83s
filebeat-ngnwt                              1/1     Running   3          83s
filebeat-pmjdh                              1/1     Running   0          83s
filebeat-tk4g6                              1/1     Running   0          83s
filebeat-xwxv6                              1/1     Running   0          83s
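To confirm that logs are actually arriving, the filebeat indices can be listed, reusing the Elasticsearch port-forward and PASSWORD from the health check in step 3. A minimal sketch:

$ curl -u "elastic:${PASSWORD}" -k "https://localhost:9200/_cat/indices/filebeat-*?v"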

6. Building metricbeat

Install metricbeat to collect metrics. metricbeat requires kube-state-metrics, so that component is installed as well.

The build is based on the official Elastic YAML for Kubernetes, modified as described below.

Download the base YAML.

curl -L -O https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/metricbeat-kubernetes.yaml

The following changes are made step by step.

6.0 kube-state-metrics

Clone kube-state-metrics from GitHub.

git clone https://github.com/kubernetes/kube-state-metrics.git

Copy the example manifests.

cp -Rp kube-state-metrics/examples/standard/ elastic-kube-state-metrics

Fix the namespace in the four copied files as follows.

    • Change

namespace: kube-system -> namespace: elastic-monitoring

$ sed -i -e 's/namespace: kube-system/namespace: elastic-monitoring/g' elastic-kube-state-metrics/*.yaml

Renaming kube-state-metrics (updated 2020-03-14)

Left as-is, if Prometheus is later installed via kube-prometheus or some other method, the ClusterRoleBinding and ClusterRole for kube-state-metrics (which are cluster-scoped, not namespaced) would end up identical to the ones installed here, and deleting one installation could take the other down with it. To avoid that interference, rename the kube-state-metrics resources.

    • Changes

kube-state-metrics -> elastic-kube-state-metrics

Note, however, that the image name remains kube-state-metrics.

$ sed -i -e 's/kube-state-metrics/elastic-kube-state-metrics/g' elastic-kube-state-metrics/*.yaml
$ sed -i -e 's/quay.io\/coreos\/elastic-kube-state-metrics/quay.io\/coreos\/kube-state-metrics/g' elastic-kube-state-metrics/deployment.yaml
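A quick grep can confirm that the resource names were rewritten while the image reference stayed intact:

$ grep -R "elastic-kube-state-metrics" elastic-kube-state-metrics/ | head
$ grep "image:" elastic-kube-state-metrics/deployment.yaml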

After the changes, apply the whole directory to build kube-state-metrics in the elastic-monitoring namespace.

$ kubectl apply -f elastic-kube-state-metrics/
clusterrolebinding.rbac.authorization.k8s.io/elastic-kube-state-metrics created
clusterrole.rbac.authorization.k8s.io/elastic-kube-state-metrics created
deployment.apps/elastic-kube-state-metrics created
serviceaccount/elastic-kube-state-metrics created
service/elastic-kube-state-metrics created

Confirm that kube-state-metrics is up.

$ kubectl get po -n elastic-monitoring | grep kube-state
elastic-kube-state-metrics-547876f486-7v892   1/1     Running   0          60s
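The metrics endpoint itself can also be spot-checked. A minimal sketch:

$ kubectl port-forward svc/elastic-kube-state-metrics 8080 -n elastic-monitoring &
$ curl -s http://localhost:8080/metrics | head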

6.1 Changing the namespace

Same as for filebeat, so everything except the command below is omitted.

$ sed -e 's/namespace: kube-system/namespace: elastic-monitoring/g' metricbeat-kubernetes.yaml > metricbeat.yaml 

6.2 Specifying the Elasticsearch/Kibana hosts, adding authentication, and referencing the Secrets

Same as for filebeat, so the details are omitted.
Note that metricbeat has two similar resources, a DaemonSet and a Deployment, so apply the changes to both.

6.3 Adding tolerations to also collect from the master nodes

Same as for filebeat, so omitted.
※ metricbeat has both a DaemonSet and a Deployment, but this change applies only to the DaemonSet.

6.4 Autodiscover

Almost identical to filebeat, so omitted.

6.5 Adding settings to the DaemonSet's kubernetes.yml

Left as-is, Pod metrics cannot be collected, so add the following.
(This has since been fixed in the latest version on GitHub, but the YAML fetched with curl as of 2020-02-23 did not yet include it.)

ConfigMap (DaemonSet only)

Before

  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10255"]

After

  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${HOSTNAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
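This swaps the kubelet's legacy read-only port (10255), which is disabled on many recent clusters, for the authenticated secure port (10250). Whether a node still serves the read-only port can be checked from the node itself; a rough sketch:

# the read-only port usually refuses connections on newer clusters
$ curl -s --max-time 2 http://localhost:10255/stats/summary || echo "10255 closed"
# the secure port answers but requires a bearer token (hence bearer_token_file above)
$ curl -sk --max-time 2 https://localhost:10250/stats/summary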

6.6 Renaming kube-state-metrics (updated 2020-03-14)

Since kube-state-metrics was renamed above, the host specified in metricbeat must be changed to match.

Before

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
~~snip~~
      hosts: ["kube-state-metrics:8080"]

After

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
~~snip~~
      hosts: ["elastic-kube-state-metrics:8080"]

6.7 Applying the modified YAML

The YAML with all of the preceding changes applied is shown below:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-config
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    # To enable hints based autodiscover uncomment this:
    metricbeat.autodiscover:
     providers:
       - type: kubernetes
         node: ${NODE_NAME}
         hints.enabled: true

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt
    setup.dashboards:
      enabled: true

    xpack.monitoring.enabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-daemonset-modules
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  system.yml: |-
    - module: system
      period: 10s
      metricsets:
        - cpu
        - load
        - memory
        - network
        - process
        - process_summary
        #- core
        #- diskio
        #- socket
      processes: ['.*']
      process.include_top_n:
        by_cpu: 5      # include top 5 processes by CPU
        by_memory: 5   # include top 5 processes by memory

    - module: system
      period: 1m
      metricsets:
        - filesystem
        - fsstat
      processors:
      - drop_event.when.regexp:
          system.filesystem.mount_point: '^/(sys|cgroup|proc|dev|etc|host|lib)($|/)'
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - node
        - system
        - pod
        - container
        - volume
      period: 10s
      host: ${NODE_NAME}
      hosts: ["https://${HOSTNAME}:10250"]
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      ssl.verification_mode: "none"
      # If using Red Hat OpenShift remove the previous hosts entry and
      # uncomment these settings:
      #hosts: ["https://${HOSTNAME}:10250"]
      #bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      #ssl.certificate_authorities:
        #- /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
    - module: kubernetes
      metricsets:
        - proxy
      period: 10s
      host: ${NODE_NAME}
      hosts: ["localhost:10249"]
---
# Deploy a Metricbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: metricbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: metricbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.6.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
          "-system.hostfs=/hostfs",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: monitoring-elasticsearch-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          # value: changeme
          valueFrom:
            secretKeyRef:
              key: elastic
              name: monitoring-elasticsearch-es-elastic-user
        - name: KIBANA_HOST
          value: monitoring-kibana-kb-http
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: proc
          mountPath: /hostfs/proc
          readOnly: true
        - name: cgroup
          mountPath: /hostfs/sys/fs/cgroup
          readOnly: true
        - name: es-certs
          mountPath: /mnt/elastic/tls.crt
          readOnly: true
          subPath: tls.crt
        - name: kb-certs
          mountPath: /mnt/kibana/tls.crt
          readOnly: true
          subPath: tls.crt
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: cgroup
        hostPath:
          path: /sys/fs/cgroup
      - name: dockersock
        hostPath:
          path: /var/run/docker.sock
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-daemonset-modules
      - name: data
        hostPath:
          path: /var/lib/metricbeat-data
          type: DirectoryOrCreate
      - name: es-certs
        secret:
          secretName: monitoring-elasticsearch-es-http-certs-public
      - name: kb-certs
        secret:
          secretName: monitoring-kibana-kb-http-certs-public
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-config
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  metricbeat.yml: |-
    metricbeat.config.modules:
      # Mounted `metricbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt
    setup.dashboards:
      enabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
data:
  # This module requires `elastic-kube-state-metrics` up and running under the `elastic-monitoring` namespace
  kubernetes.yml: |-
    - module: kubernetes
      metricsets:
        - state_node
        - state_deployment
        - state_replicaset
        - state_pod
        - state_container
        - state_cronjob
        - state_resourcequota
        # Uncomment this to get k8s events:
        #- event
      period: 10s
      host: ${NODE_NAME}
      hosts: ["elastic-kube-state-metrics:8080"]
---
# Deploy singleton instance in the whole cluster for some unique data sources, like kube-state-metrics
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metricbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
spec:
  selector:
    matchLabels:
      k8s-app: metricbeat
  template:
    metadata:
      labels:
        k8s-app: metricbeat
    spec:
      serviceAccountName: metricbeat
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: metricbeat
        image: docker.elastic.co/beats/metricbeat:7.6.0
        args: [
          "-c", "/etc/metricbeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: monitoring-elasticsearch-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          # value: changeme
          valueFrom:
            secretKeyRef:
              key: elastic
              name: monitoring-elasticsearch-es-elastic-user
        - name: KIBANA_HOST
          value: monitoring-kibana-kb-http
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/metricbeat.yml
          readOnly: true
          subPath: metricbeat.yml
        - name: modules
          mountPath: /usr/share/metricbeat/modules.d
          readOnly: true
        - name: es-certs
          mountPath: /mnt/elastic/tls.crt
          readOnly: true
          subPath: tls.crt
        - name: kb-certs
          mountPath: /mnt/kibana/tls.crt
          readOnly: true
          subPath: tls.crt
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: metricbeat-deployment-modules
      - name: es-certs
        secret:
          secretName: monitoring-elasticsearch-es-http-certs-public
      - name: kb-certs
        secret:
          secretName: monitoring-kibana-kb-http-certs-public
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metricbeat
subjects:
- kind: ServiceAccount
  name: metricbeat
  namespace: elastic-monitoring
roleRef:
  kind: ClusterRole
  name: metricbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metricbeat
  labels:
    k8s-app: metricbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - replicasets
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - deployments
  verbs: ["get", "list", "watch"]
- apiGroups:
  - ""
  resources:
  - nodes/stats
  verbs:
  - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metricbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: metricbeat
---

Apply the YAML above.

$ kubectl apply -f metricbeat.yaml 
configmap/metricbeat-daemonset-config created
configmap/metricbeat-daemonset-modules created
daemonset.apps/metricbeat created
configmap/metricbeat-deployment-config created
configmap/metricbeat-deployment-modules created
deployment.apps/metricbeat created
clusterrolebinding.rbac.authorization.k8s.io/metricbeat created
clusterrole.rbac.authorization.k8s.io/metricbeat created
serviceaccount/metricbeat created

$ kubectl get po -n elastic-monitoring | grep metricbeat
metricbeat-57jpz                            1/1     Running   0          15s
metricbeat-67b75b56b5-4r9jn                 1/1     Running   0          15s
metricbeat-8kmg7                            1/1     Running   0          15s
metricbeat-fwfmn                            1/1     Running   0          15s
metricbeat-jckss                            1/1     Running   0          15s
metricbeat-r9vkj                            1/1     Running   0          15s
metricbeat-rrm69                            1/1     Running   0          15s
metricbeat-sx5b8                            1/1     Running   0          15s
metricbeat-wq498                            1/1     Running   0          15s
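Whether the Metricbeat dashboards were loaded into Kibana can be spot-checked via the saved objects API, reusing the Kibana port-forward from step 4 and PASSWORD from step 3. A minimal sketch:

$ curl -sk -u "elastic:${PASSWORD}" "https://localhost:5601/api/saved_objects/_find?type=dashboard&search=Metricbeat*" | head -c 500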

7. Building auditbeat

Install auditbeat; its Docker image can be used on Kubernetes to check file integrity.

$ curl -L -O https://raw.githubusercontent.com/elastic/beats/7.6/deploy/kubernetes/auditbeat-kubernetes.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4288  100  4288    0     0   9329      0 --:--:-- --:--:-- --:--:--  9342
$ sed -e 's/namespace: kube-system/namespace: elastic-monitoring/g' auditbeat-kubernetes.yaml > auditbeat.yaml 

Beyond the namespace change, the same authentication and tolerations changes described for filebeat were applied, resulting in the following.

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: auditbeat-config
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
data:
  auditbeat.yml: |-
    auditbeat.config.modules:
      # Mounted `auditbeat-daemonset-modules` configmap:
      path: ${path.config}/modules.d/*.yml
      # Reload module configs as they change:
      reload.enabled: false

    processors:
      - add_cloud_metadata:

    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}

    output.elasticsearch:
      hosts: ['https://${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      ssl.certificate_authorities:
        - /mnt/elastic/tls.crt

    setup.kibana:
      host: "https://${KIBANA_HOST}:5601"
      username: ${ELASTICSEARCH_USERNAME}
      password: ${ELASTICSEARCH_PASSWORD}
      protocol: "https"
      ssl.certificate_authorities:
        - /mnt/kibana/tls.crt
    setup.dashboards:
      enabled: true

    xpack.monitoring.enabled: true
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: auditbeat-daemonset-modules
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
data:
  system.yml: |-
    - module: file_integrity
      paths:
      - /hostfs/bin
      - /hostfs/usr/bin
      - /hostfs/sbin
      - /hostfs/usr/sbin
      - /hostfs/etc
      exclude_files:
      - '(?i)\.sw[nop]$'
      - '~$'
      - '/\.git($|/)'
      scan_at_start: true
      scan_rate_per_sec: 50 MiB
      max_file_size: 100 MiB
      hash_types: [sha1]
      recursive: true
---
# Deploy an auditbeat instance per node for node metrics retrieval
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: auditbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
spec:
  selector:
    matchLabels:
      k8s-app: auditbeat
  template:
    metadata:
      labels:
        k8s-app: auditbeat
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: auditbeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: auditbeat
        image: docker.elastic.co/beats/auditbeat:7.6.0
        args: [
          "-c", "/etc/auditbeat.yml"
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: monitoring-elasticsearch-es-http
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: ELASTICSEARCH_USERNAME
          value: elastic
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              key: elastic
              name: monitoring-elasticsearch-es-elastic-user
        - name: KIBANA_HOST
          value: monitoring-kibana-kb-http
        - name: ELASTIC_CLOUD_ID
          value:
        - name: ELASTIC_CLOUD_AUTH
          value:
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/auditbeat.yml
          readOnly: true
          subPath: auditbeat.yml
        - name: modules
          mountPath: /usr/share/auditbeat/modules.d
          readOnly: true
        - name: bin
          mountPath: /hostfs/bin
          readOnly: true
        - name: sbin
          mountPath: /hostfs/sbin
          readOnly: true
        - name: usrbin
          mountPath: /hostfs/usr/bin
          readOnly: true
        - name: usrsbin
          mountPath: /hostfs/usr/sbin
          readOnly: true
        - name: etc
          mountPath: /hostfs/etc
          readOnly: true
        - name: es-certs
          mountPath: /mnt/elastic/tls.crt
          readOnly: true
          subPath: tls.crt
        - name: kb-certs
          mountPath: /mnt/kibana/tls.crt
          readOnly: true
          subPath: tls.crt
      volumes:
      - name: bin
        hostPath:
          path: /bin
      - name: usrbin
        hostPath:
          path: /usr/bin
      - name: sbin
        hostPath:
          path: /sbin
      - name: usrsbin
        hostPath:
          path: /usr/sbin
      - name: etc
        hostPath:
          path: /etc
      - name: config
        configMap:
          defaultMode: 0600
          name: auditbeat-config
      - name: modules
        configMap:
          defaultMode: 0600
          name: auditbeat-daemonset-modules
      - name: data
        hostPath:
          path: /var/lib/auditbeat-data
          type: DirectoryOrCreate
      - name: es-certs
        secret:
          secretName: monitoring-elasticsearch-es-http-certs-public
      - name: kb-certs
        secret:
          secretName: monitoring-kibana-kb-http-certs-public
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auditbeat
subjects:
- kind: ServiceAccount
  name: auditbeat
  namespace: elastic-monitoring
roleRef:
  kind: ClusterRole
  name: auditbeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: auditbeat
  labels:
    k8s-app: auditbeat
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - pods
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: auditbeat
  namespace: elastic-monitoring
  labels:
    k8s-app: auditbeat
---

Apply the YAML above.

$ kubectl apply -f auditbeat.yaml 
configmap/auditbeat-config unchanged
configmap/auditbeat-daemonset-modules unchanged
daemonset.apps/auditbeat created
clusterrolebinding.rbac.authorization.k8s.io/auditbeat created
clusterrole.rbac.authorization.k8s.io/auditbeat created
serviceaccount/auditbeat created

$ kubectl get po -n elastic-monitoring | grep audit
auditbeat-5s6rh                             1/1     Running   0          53s
auditbeat-6xrkc                             1/1     Running   0          53s
auditbeat-846pz                             1/1     Running   0          53s
auditbeat-8szhp                             1/1     Running   0          53s
auditbeat-9kqsf                             1/1     Running   0          53s
auditbeat-njf45                             1/1     Running   0          53s
auditbeat-v7swg                             1/1     Running   0          53s
auditbeat-vx4hv                             1/1     Running   0          53s
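To confirm that file integrity events are flowing, the auditbeat indices can be queried, again reusing the Elasticsearch port-forward and PASSWORD from step 3. A minimal sketch:

$ curl -u "elastic:${PASSWORD}" -k "https://localhost:9200/auditbeat-*/_search?q=event.module:file_integrity&size=1&pretty"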

Confirming the Display in Kibana

With the build complete, the various views can now be confirmed in Kibana.

Metrics

From the metrics views you can drill down into the logs as needed.

    • Metrics such as CPU / memory / traffic can be displayed per host or per Pod

    • Pods can also be displayed grouped by Namespace, Node, and so on

    • Logs can be checked per host or per Pod
kibana_metrics_node.gif
kibana_metrics_pods.gif
kibana_metrics_pods_metrics.gif
kibana_metrics_pods_logs.gif

Dashboard: [Filebeat System] Syslog dashboard ECS

    • syslog can be confirmed to be collected per node

    • Login history on the nodes can also be checked

Syslog display (per node)

スクリーンショット 2020-02-23 22.00.36.png
スクリーンショット 2020-02-23 22.00.56.png

Dashboard: [Metricbeat Kubernetes] Overview ECS

スクリーンショット 2020-02-23 22.01.56.png

Dashboard: [Auditbeat File Integrity] Overview ECS

スクリーンショット 2020-02-23 22.04.47.png

Summary

Using Elastic Stack, I built a log and metrics visualization environment for the Kubernetes cluster.
Now, when a node or Pod failure occurs on Kubernetes, I can look at the metrics and then drill into just the relevant logs.
Going forward, I plan to stand up separate filebeat instances and integrate them with this environment to collect and visualize external NetFlow data, logs, and more.
