Elasticsearch, Fluentd, Kibana
Elastic Cloud on Kubernetes (ECK)
- EFK Versions:
- Elasticsearch v6.5.4
- Kibana v6.5.4 (server base path)
- Fluentd v1.3.2 (td-agent v3.5.0)
- Fluentd is all the rage in container environments but not much else, which is essentially the niche it has settled into.
- Fluentd is a popular open-source data collector that we'll set up on our Kubernetes nodes to tail container log files, filter and transform the log data, and deliver it to the Elasticsearch cluster, where it will be indexed and stored. In addition to container logs, the Fluentd agent will tail Kubernetes system component logs such as kubelet, kube-proxy, and Docker logs (a minimal tail-source sketch follows this list).
- ECK: dynamically resizes local storage (including Elastic Local Volume, a local storage driver). How do you provide dynamically scalable, persistent local storage? Elastic Local Volume, an integrated storage driver for Kubernetes, is built directly into ECK.
- TLS certificates for the transport layer, which is used for internal communication between the Elasticsearch nodes in a cluster, are managed by ECK and are not configurable.
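A minimal sketch of the tail source such an agent typically uses, assuming the default /var/log/containers path and Docker's JSON log format; the pos_file path is illustrative:

<source>
  @type tail
  # Tail every container log file the kubelet symlinks here (assumed default path)
  path /var/log/containers/*.log
  # Position file so Fluentd resumes where it left off (illustrative path)
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    # Docker writes one JSON object per line with a "time" field
    @type json
    time_format %Y-%m-%dT%H:%M:%S.%NZ
  </parse>
</source>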
ECK Flags:
- xpack.ml.enabled: Set to true (default) to enable machine learning on the node.
- NodeSet: A set of Elasticsearch nodes that share the same Elasticsearch configuration and a Kubernetes Pod template. Multiple NodeSets can be defined in the Elasticsearch CRD to achieve a cluster topology consisting of groups of Elasticsearch nodes with different node roles, resource requirements, and hardware configurations (Kubernetes node constraints).
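A minimal sketch of an Elasticsearch CRD with two NodeSets, assuming the ECK v1 API; the cluster name, version, and sizing values are illustrative:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart            # hypothetical cluster name
spec:
  version: 7.6.0              # illustrative version
  nodeSets:
  - name: master              # dedicated master nodes
    count: 3
    config:
      node.master: true
      node.data: false
      xpack.ml.enabled: false # ML off on the masters (see the flag above)
  - name: data                # data nodes with their own resource requests
    count: 3
    config:
      node.master: false
      node.data: true
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
          resources:
            requests:
              memory: 4Gi     # illustrative sizing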
Kibana search example: kubernetes.namespace_name : qa AND kubernetes.labels.app : authmgr
kubernetes.labels.app : authmgr/redis/addressbook/admingui/chatmgr/confmgr/coreadaptor/gateway/idpmgr/storemgr/transproxy/uosdata

fluentd-elasticsearch config directory: /etc/fluent/config.d
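The record_reformer snippet below derives pod, namespace, and container fields from the Fluentd tag. In record_reformer, tag_suffix[n] is the original tag with its first n dot-separated parts removed; for example, with the tag kubernetes.var.log.containers.mypod_qa_app-abc123.log, tag_suffix[4] is mypod_qa_app-abc123.log. The indices 5–9 below therefore assume a source tag with a longer fixed prefix than this example.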
type record_reformer
pod_name ${tag_suffix[5].split('_')[0]}
namespace_name ${tag_suffix[5].split('_')[1].split('.')[0]}
container_name ${tag_suffix[6].split('.')[0]}
container_id ${tag_suffix[7].split('.')[0]}
logdir ${tag_suffix[9]}
logstash_prefix logstash-${record['kubernetes']['namespace_name']}

Clean Up the Logs:
<match kubernetes.var.log.containers.**fluentd**.log>
  @type null
</match>
<match kubernetes.var.log.containers.**kube-system**.log>
  @type null
</match>
<match kubernetes.**>
  @type stdout
</match>
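Note that Fluentd routes an event to the first <match> whose pattern fits its tag, in the order they appear, so the @type null blocks must come before the catch-all kubernetes.** match or the noisy fluentd and kube-system logs would be emitted too.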
ElastAlert (log alerting for production services)
- Elasticsearch is periodically queried and the data is passed to the rule type, which determines when a match is found. When a match occurs, it is passed to one or more alerts, which take action based on the match.
- This is configured by a set of rules, each of which defines a query, a rule type, and a set of alerts (a minimal example rule follows this list).
- ElastAlert saves information and metadata about its queries and alerts back into Elasticsearch (for auditing and debugging, and so that ElastAlert can resume exactly where it left off after a restart).
- ElastAlert for first time and Configuration — Part 2
elastalert_status is a log of the queries performed for a given rule.
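A minimal sketch of a frequency rule, assuming index names produced by the logstash_prefix shown earlier; the rule name, thresholds, and e-mail address are illustrative:

name: authmgr-error-spike            # hypothetical rule name
type: frequency                      # rule type: fire when num_events docs match within timeframe
index: logstash-qa-*                 # assumes the logstash_prefix above
num_events: 50
timeframe:
  minutes: 5
filter:
- query:
    query_string:
      query: 'kubernetes.labels.app: "authmgr" AND log: "ERROR"'
alert:
- "email"
email:
- "oncall@example.com"               # hypothetical recipient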
Fluentd <buffer> section
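A sketch of a file-buffered Elasticsearch output using the Fluentd v1 <buffer> syntax; the host and tuning values are illustrative:

<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc    # hypothetical service name
  port 9200
  logstash_format true
  <buffer>
    @type file                      # buffer chunks on disk to survive restarts
    path /var/log/fluentd-buffers/kubernetes.buffer   # illustrative path
    flush_mode interval
    flush_interval 5s
    flush_thread_count 2
    retry_type exponential_backoff
    retry_max_interval 30
    chunk_limit_size 2M
    queue_limit_length 8
    overflow_action block           # apply backpressure instead of dropping events
  </buffer>
</match>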
Fluentd regular expression editor
How to match logs from specific containers?
- Collecting Docker container logs with Fluentd (save to a local file / how Fluentd obtains the container logs)
- Fluentd: splitting the tag field to extract new fields
- Application Logging in Kubernetes with fluentd
- Fluentd: add log path to the record
- Kubernetes Fluentd Config Example
- remove a nested key with fluentd
- Standard way for nested record support
fluentd-logging-kubernetes.yaml
Issues on Fluentd
Fluent::Plugin::ElasticsearchErrorHandler::ElasticsearchError error="400 - Rejected by Elasticsearch
This error is caused by Elasticsearch, not by the ES plugin (set log_es_400_reason to log why Elasticsearch rejected the request). A typical cause is an Elasticsearch timestamp parse error, using date format: "2019-10-25T10:06:46.132Z"
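A sketch of turning that option on in a fluent-plugin-elasticsearch match block; the host is hypothetical, as in the buffer sketch above:

<match kubernetes.**>
  @type elasticsearch
  host elasticsearch.logging.svc
  port 9200
  # Log the reason Elasticsearch returned the 400 (default: false)
  log_es_400_reason true
</match>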
Logrotate
logrotate parameters:
- rotate 4 : makes sure that four old versions of the file are saved. Rotates a given log four times before deleting it
- create: The old file is saved under a new name and a new file is created
- compress: causes logrotate to compress rotated log files to save space. This is done using gzip by default, but you can specify another program; the compression command can be changed using the compresscmd option
- missingok: Don’t raise an error if the log is missing
- delaycompress/nodelaycompress: delaycompress postpones compression by one rotation cycle, so a file is not compressed until it has already been rotated once; this prevents corruption if the daemon doesn't close the log file immediately
- notifempty: Don't rotate the log file when it is empty
- size: rotates log files that grow bigger than the size specified here
- weekly: rotates the log files once a week
- create 640 root adm: creates new log files with the given permissions, owner, and group
- copytruncate: truncates the old log file in place after creating a copy, instead of moving the old file and creating a new one
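A minimal sketch tying these directives together, assuming a hypothetical /var/log/myapp/*.log path:

# Rotate weekly, keep four old versions, gzip one cycle late,
# and recreate the file as 640 root:adm after each rotation.
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    create 640 root adm
}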
Execute a cron job every 5 Minutes : */5 * * * * /home/ramesh/backup.sh
Execute a cron job every 5 Hours : 0 */5 * * * /home/ramesh/backup.sh
cat /var/lib/logrotate/status : shows when logrotate last handled each log file (on some distributions the state file is /var/lib/logrotate.status)
crontab -l : lists all your cron jobs
References
- Installing Elasticsearch on Kubernetes Using Operator and setting it for Kubernetes logging
- How To Set Up an Elasticsearch, Fluentd and Kibana (EFK) Logging Stack on Kubernetes (Great)
- Running Elasticsearch on Kubernetes: a new chapter
- logrotate by size - do I need to change the cron?
- HowTo: The Ultimate Logrotate Command Tutorial with 10 Examples