1. ELK Stack Overview

Filebeat: a lightweight, open-source log file shipper, developed from the Logstash-Forwarder source code as its replacement. Install Filebeat on each server that produces logs you want to collect and point it at the log directories or files; it reads the data and forwards it quickly to Logstash for parsing, or directly to Elasticsearch for centralized storage and analysis.

Logstash: a data collection engine. It dynamically gathers data from a variety of sources, filters, parses, enriches, and normalizes it, then writes it to a destination of your choice.

Elasticsearch: a distributed search and analytics engine that is highly scalable, reliable, and easy to manage. Built on Apache Lucene, it stores, searches, and analyzes large volumes of data in near real time, and it often serves as the underlying search engine for applications that need complex search features.

Kibana: a data analysis and visualization platform, typically used together with Elasticsearch to search and analyze its data and present the results as charts and dashboards.

2. Installation

1. Filebeat
    wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-linux-x86_64.tar.gz
    tar -xzf filebeat-6.4.0-linux-x86_64.tar.gz
    Official documentation:
        https://www.elastic.co/guide/en/beats/filebeat/6.4/filebeat-getting-started.html
2. Logstash
    wget https://artifacts.elastic.co/downloads/logstash/logstash-5.2.2.tar.gz
    tar -xzf logstash-5.2.2.tar.gz
    Official documentation:
        https://www.elastic.co/guide/en/logstash/5.2/index.html
3. Elasticsearch
    wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.2.2.zip
    unzip elasticsearch-5.2.2.zip
    Official documentation:
        https://www.elastic.co/guide/en/elasticsearch/reference/5.2/index.html
4. Kibana
    wget https://artifacts.elastic.co/downloads/kibana/kibana-5.2.2-linux-x86_64.tar.gz
    tar -xzf kibana-5.2.2-linux-x86_64.tar.gz
    Official documentation:
        https://www.elastic.co/guide/en/kibana/5.2/index.html
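
After unpacking, each component can be started from its own directory. A minimal startup sketch, assuming the tarballs above were extracted side by side and the pipeline from section 3 is saved as `logstash.conf` inside the Logstash directory (note that Elasticsearch 5.x refuses to run as root):

```shell
(cd elasticsearch-5.2.2 && ./bin/elasticsearch) &          # REST API on port 9200
(cd kibana-5.2.2-linux-x86_64 && ./bin/kibana) &           # web UI on port 5601
(cd logstash-5.2.2 && ./bin/logstash -f logstash.conf) &   # beats input on port 5044
cd filebeat-6.4.0-linux-x86_64 && ./filebeat -e -c filebeat.yml
```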

3. Configuration

1. Filebeat

```shell
# filebeat.yml
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log                # log file path(s)
    - /tmp/ccapi.log
    #- c:\programdata\elasticsearch\logs\*

  # Note: document_type was removed in Filebeat 6.x; the custom field under
  # "fields" below is used to tell services apart instead.
  tags: ["ccapi"]                     # add tags

  ### Multiline options
  # Fold multi-line events such as exception stack traces: every line that
  # does not start with "[" is appended to the line before it.
  multiline.pattern: ^\[
  multiline.negate: true
  multiline.match: after

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    type: "ccapi"                     # custom field used to distinguish services

#==================== Elasticsearch template setting ==========================
# The setup.* and output.* options below are top-level settings, not part of
# the input definition above. With the Logstash output used here, Filebeat
# does not load the template into ES automatically; see section 4 for the
# manual import.
setup.template.settings:
  index.number_of_shards: 3
setup.template.name: "logstash"       # ES index template name, default is "filebeat"
setup.template.pattern: "logstash-*"
setup.template.json.enabled: true     # load the template from a JSON file; here we
setup.template.json.path: "/home/phenix/logstash/logstash.template.json"  # reuse the default template exported from Logstash

#============================== Kibana =====================================
setup.kibana:
  host: "localhost:5601"

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["localhost:5044"]
```
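
Filebeat can validate this file before running; the `test` subcommands below are part of Filebeat 6.x:

```shell
./filebeat test config -c filebeat.yml   # validate the YAML and settings
./filebeat test output -c filebeat.yml   # check the connection to localhost:5044
./filebeat -e -c filebeat.yml            # run in the foreground, log to stderr
```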

2. Logstash

```shell
# logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][type] == "ccapi" {      # events belonging to the ccapi service
    json {
      # The log line is already JSON, stored as a string in the "message"
      # field, so parse "message" ...
      source => "message"
      # ... and put the parsed result under "msg". Using a target avoids the
      # case where the JSON itself contains type/host/path keys, which would
      # otherwise overwrite Logstash's default fields of the same names.
      target => "msg"
      remove_field => ["message"]     # drop the raw string once parsed
    }
  }
}

output {
  stdout { codec => rubydebug }

  if [fields][type] == "ccapi" {      # events belonging to the ccapi service
    elasticsearch {
      hosts => ["192.168.1.33:9200"]
      index => "logstash-ccapi-%{+YYYY.MM.dd}"  # one index per service, sliced by date
      document_type => "ccapi"        # type under the index
      flush_size => 2000              # send once 2000 events have accumulated
      idle_flush_time => 10           # or flush every 10s, whichever comes first
      sniffing => true
      template_overwrite => true
    }
  }
}
```
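
Logstash can likewise check the pipeline before starting; both flags exist in Logstash 5.x (`logstash.conf` is simply the filename assumed for the config above):

```shell
bin/logstash -f logstash.conf --config.test_and_exit      # syntax check only
bin/logstash -f logstash.conf --config.reload.automatic   # run, reloading the config on edits
```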

Sample output:

	 {"msg": {
		         "msg": "Request",
		         "respContentType": "application/json",
		         "reqUri": "/api/room/create?",
		         "reqMethod": "POST",
		         "respTimeMs": 23,
		         "logSource": "web",
		         "ascTime": "2019-06-04T05:54:58.623Z",
		         "type": "request",
		         "userId": null,
		         "respStat": 200,
		         "token": null,
		         "reqId": "49136896868d11e98923b083febfc53a",
		         "respSizeB": 239,
		         "reqData": null,
		         "service": "ccapi",
		         "clientIp": "192.168.1.73",
		         "serverIp": "localhost",
		         "device": "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"
		       },
		       "input": {
		         "type": "log"
		       },
		       "@timestamp": "2019-06-04T05:54:55.678Z",
		       "offset": 2720807,
		       "@version": "1",
		       "host": {
		         "name": "localhost"
		       },
		       "beat": {
		         "hostname": "localhost",
		         "name": "localhost",
		         "version": "6.4.0"
		       },
		       "prospector": {
		         "type": "log"
		       },
		       "source": "/tmp/ccapi.log",
		       "fields": {
		         "type": "ccapi"
		       },
		       "tags": [
		         "ccapi",
		         "beats_input_codec_plain_applied"
		       ]}

3. Elasticsearch

```shell
# elasticsearch.yml
network.host: 0.0.0.0
http.port: 9200
```
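
With `network.host: 0.0.0.0`, Elasticsearch listens on all interfaces; a quick check that it is up:

```shell
curl 'http://192.168.1.33:9200/'   # should return the node name, cluster name, and version
```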

4. Kibana

```shell
# kibana.yml
server.host: 0.0.0.0
elasticsearch.url: "http://192.168.1.33:9200"
```
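
Kibana then serves its UI on port 5601; its status endpoint doubles as a liveness check:

```shell
curl 'http://192.168.1.33:5601/api/status'   # JSON with the overall service state
# Or open http://192.168.1.33:5601 in a browser and create an index pattern
# matching "logstash-*" to browse the indexed logs.
```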

4. Miscellaneous

1. Export the default Logstash template manually

https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/master/lib/logstash/outputs/elasticsearch/elasticsearch-template-es5x.json

2. Import the template into ES manually

curl -XPUT 'http://localhost:9200/_template/logstash' -d @<template-path>
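
A quick way to confirm the template landed in ES (note that on ES 6.0+ the PUT above would also need a `-H 'Content-Type: application/json'` header):

```shell
curl 'http://localhost:9200/_template/logstash?pretty'   # prints the installed template
```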

3. List ES indices

curl 'localhost:9200/_cat/indices?v'

4. View data in an index

curl 'localhost:9200/<index>/<type>/_search?pretty'
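
For example, against the index and type created by the Logstash output above (names specific to this setup; adjust to yours), a URI search for POST requests:

```shell
curl 'localhost:9200/logstash-ccapi-2019.06.04/ccapi/_search?q=msg.reqMethod:POST&pretty'
```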

5. Delete an index

curl -XDELETE 'localhost:9200/<index>'

5. References

Learning Elasticsearch: https://fuxiaopang.gitbooks.io/learnelasticsearch/

Setting ES templates: https://birdben.github.io/2016/12/22/Logstash/Logstash%E5%AD%A6%E4%B9%A0%EF%BC%88%E5%85%AD%EF%BC%89elasticsearch%E6%8F%92%E4%BB%B6%E2%80%94%E2%80%94%E8%AE%BE%E7%BD%AEES%E7%9A%84Template/ and https://www.jianshu.com/p/1f67e4436c37

Logstash fields explained: http://www.51niux.com/?id=205

ELK architecture evolution: https://github.com/jasonGeng88/blog/blob/master/201703/logstash_deploye_scale.md

ELK/Kibana dashboard setup: http://docs.flycloud.me/docs/ELKStack/logstash/get-start/index.html , https://github.com/jasonGeng88/blog/blob/master/201703/elk_parse_log.md , and https://www.ibm.com/developerworks/cn/opensource/os-cn-elk-filebeat/index.html