Visualizing Apache logs with fluentd + Elasticsearch + Kibana

Introduction

I originally got as far as connecting Elasticsearch and Kibana here.
I kept working on it afterwards but never published anything, so I am writing it up bit by bit from memory.

Overview

Host OS

A system built with Apache, MySQL, and Drupal is running here.
Fluentd forwards Apache's access log to the guest OS.

Guest OS

Fluentd, Elasticsearch, and Kibana are running here.
Logs sent from the host OS are stored in Elasticsearch,
and the stored data is visualized in Kibana.
(The visualization itself is done in a browser on the host OS.)

Environment

[Host OS] OS X Yosemite 10.10.5 → El Capitan 10.11.3
[Guest OS] CentOS 6.7
VirtualBox 4.3.14
Fluentd 2.3.0
Elasticsearch
Kibana 4.3.0

Host OS

Installing Fluentd

Download it from here.
Since my host OS is OS X, I downloaded the .dmg file
and installed it by clicking through the "Next" buttons. The td-agent configuration (td-agent.conf) then looks like this:

<source>
  type tail
  format apache
  path /Applications/drupal-7.41-1/apache2/logs/access_log
  tag apache.access
</source>

# Output to a file is also possible
#<match apache.access>
#  type file
#  path /Applications/drupal-7.41-1/apache2/logs/access_log.pos
#</match>

<match apache.access>
  type forward
  <server>
    host 192.168.56.10
  </server>
</match>
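
As a side note, the in_tail input also accepts an optional pos_file parameter so that td-agent remembers how far it has read across restarts; a minimal sketch (the pos_file path below is only an example):

<source>
  type tail
  format apache
  path /Applications/drupal-7.41-1/apache2/logs/access_log
  # example location for the position file; any path writable by td-agent works
  pos_file /var/log/td-agent/apache_access.log.pos
  tag apache.access
</source>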

Starting and stopping:

# Start
launchctl load /Library/LaunchDaemons/td-agent.plist
# Stop
launchctl unload /Library/LaunchDaemons/td-agent.plist
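
To check that the host side is actually tailing and forwarding, one quick test (assuming Drupal answers at http://localhost/drupal and that td-agent writes its own log to /var/log/td-agent/td-agent.log, the default location) is:

# generate a fresh access-log entry
curl -s http://localhost/drupal/ > /dev/null
# watch td-agent's own log for parse or forwarding errors
tail -f /var/log/td-agent/td-agent.log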

Guest OS

Installing Elasticsearch

For installation, see here. The relevant parts of elasticsearch.yml are shown below.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /data
#
# Path to log files:
#
path.logs: /var/log/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.mlockall: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
#network.host: 192.168.56.10
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Elasticsearch nodes will find each other via unicast, by default.
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

# enable cross-origin resource sharing
http.cors.enabled: true

Starting it and checking the status:

systemctl start elasticsearch
systemctl status elasticsearch
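
Once it is running, a quick sanity check against the HTTP API (9200 is the default port) should return a small JSON document with node and version information:

curl http://localhost:9200/
# cluster-level health as a bonus check
curl http://localhost:9200/_cluster/health?pretty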

Installing Fluentd

I used the stable distribution (td-agent).

# yum -y install td-agent

The settings in /etc/td-agent/td-agent.conf:

<source>
  type forward
</source>

<match apache.access>
  type elasticsearch
  logstash_format true
  hosts localhost:9200
  type_name application-log
  buffer_type memory
  retry_limit 17
  retry_wait 1.0
  num_threads 1
  flush_interval 60
</match>

For more detailed configuration options, please refer to the original site.
It also explains what each setting does, so it is a very useful reference.

Installing Kibana

For detailed installation steps, see here.
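
Only a couple of settings normally need changing so that a browser on the host OS can reach Kibana. A minimal sketch for Kibana 4.3, assuming the default install location /opt/kibana/config/kibana.yml:

# /opt/kibana/config/kibana.yml (assumed default location)
server.host: "0.0.0.0"                      # listen on all interfaces, not only localhost
elasticsearch.url: "http://localhost:9200"  # Elasticsearch runs on the same guest OS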

Starting it and checking the status:

systemctl start td-agent
systemctl status td-agent
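
Once td-agent is running and the Drupal site has received a few requests, the daily logstash-YYYY.MM.DD indices created by logstash_format true should start appearing in Elasticsearch. A quick way to check:

# list indices; one logstash-YYYY.MM.DD index per day should show up
curl 'http://localhost:9200/_cat/indices?v'
# look at one of the stored access-log documents
curl 'http://localhost:9200/logstash-*/_search?size=1&pretty'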

Checking that it works

1. Access the system mentioned at the beginning (http://localhost/drupal) so that some access-log entries are generated (for example with the curl loop shown below).
2. Access http://[guest OS IP]:5601 from a browser on the host OS and confirm that Kibana is displayed.
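
A hedged one-liner for step 1, assuming the Drupal front page answers at http://localhost/drupal:

# fire a handful of requests so there is something to plot in Kibana
for i in $(seq 1 20); do curl -s http://localhost/drupal/ > /dev/null; done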

(Screenshot: Capture 2016-04-11 0.14.05.png)

At first I just poked around without reading any manuals or documentation; it is fairly intuitive and easy to operate.

(Screenshot: Capture 2016-04-11 0.27.33.png)

Conclusion

Logs can now be forwarded, stored, and visualized.
I have already posted here about monitoring the guest OS we built using Cacti,
so reading the two articles together may be helpful.
