How does Filebeat collect logs from Kubernetes?

To collect Kubernetes logs with Filebeat, you configure Filebeat to read the container log files that Kubernetes writes on each node (typically under /var/log/containers/).

Here is an example of a Filebeat configuration file:

filebeat.inputs:
- type: container
  paths:
    - /var/log/containers/*.log
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
        matchers:
          - logs_path:
              logs_path: "/var/log/containers/"

output.elasticsearch:
  hosts: ["your_elasticsearch_host:9200"]

In the configuration above, the input type is set to container, and paths specifies which log files to monitor (/var/log/containers/*.log). The processors section uses the add_kubernetes_metadata processor to enrich each log event with Kubernetes metadata (such as pod name and namespace), making it possible to distinguish the logs of different containers.

The output.elasticsearch section specifies the address of the Elasticsearch host that will receive the events.

Once the configuration is complete, start Filebeat. It will begin monitoring the Kubernetes log files, collect the log events, and ship them to Elasticsearch.
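If Filebeat is installed directly on a node, it can be started in the foreground like this (a minimal sketch; it assumes the configuration above has been saved as filebeat.yml in the current directory):

```shell
# -e sends Filebeat's own logs to stderr instead of a log file,
# -c points Filebeat at an explicit configuration file.
filebeat -e -c filebeat.yml
```

Running in the foreground with -e is convenient for verifying the configuration before running Filebeat as a service or DaemonSet.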

Note that the example above assumes Filebeat is already running inside the Kubernetes cluster. If Filebeat runs outside the cluster, you need to point it at the Kubernetes API so it can retrieve metadata about containers and Pods: set add_kubernetes_metadata.in_cluster to false and set add_kubernetes_metadata.host to the Kubernetes API address.
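Following the options described above, the processor section for an out-of-cluster setup might look like this (a sketch based on those options; the API server address is a placeholder you must replace with your own):

```yaml
processors:
  - add_kubernetes_metadata:
      # Filebeat is running outside the cluster, so disable in-cluster discovery.
      in_cluster: false
      # Kubernetes API address (placeholder; replace with your cluster's endpoint).
      host: "https://your_kubernetes_api_server:6443"
      matchers:
        - logs_path:
            logs_path: "/var/log/containers/"
```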
