Common Logstash filters

Introduction to Logstash

The Logstash project was born on August 2, 2009. Its author is the well-known operations engineer Jordan Sissel, and the project was later acquired by the company behind Elasticsearch.
Logstash is a data collection engine with real-time pipelining, written in JRuby.
Logstash ships with many powerful filters to cover all kinds of use cases.

Logstash filter plugins explained, with examples

1.1 grok regex capture

grok is an extremely powerful Logstash filter plugin. It can parse arbitrary text with regular expressions and turn unstructured log data into a structured, easily queryable form. It is currently the best way to parse unstructured log data in Logstash.
The grok syntax rule is:

%{SYNTAX:SEMANTIC}

"SYNTAX" is the name of the pattern to match: for example, the NUMBER pattern matches digits, while the IP pattern matches an IP address such as 127.0.0.1. "SEMANTIC" is the field name the matched text is stored under. Take the following log line as an example:

172.16.213.132 [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039
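For instance, a single pattern built from these building blocks could carve that whole line into named fields. This is only a sketch to show the idea; the field names ip, timestamp, referrer, status and bytes are arbitrary labels chosen here, not anything mandated by grok:

%{IP:ip}\ \[%{HTTPDATE:timestamp}\]\ %{QS:referrer}\ %{NUMBER:status}\ %{NUMBER:bytes}

The sections below build this up one piece at a time.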

1) Filter out the IP address

input {
        stdin {}
}
filter {
        grok {
                match => {"message" => "%{IPV4:ip}"}
        }
}
output {
        stdout {
                codec => rubydebug
        }
}

Start Logstash and you should see:

172.16.213.132 [07/Feb/2018:16:24:19 +0800]"GET /HTTP/1.1" 403 5039   # this line is typed in manually
       "message" => "172.16.213.132 [07/Feb/2018:16:24:19 +0800]\"GET /HTTP/1.1\" 403 5039",
            "ip" => "172.16.213.132",
      "@version" => "1",
          "host" => "ip-172-31-22-29.ec2.internal",
    "@timestamp" => 2019-01-22T09:48:15.354Z

The result now contains the extracted ip field.
2) An example of extracting the timestamp

filter {
        grok {
                match => {"message" => "%{IPV4:ip}\ \[%{HTTPDATE:timestamp}\]"}
        }
}

172.16.213.132 [07/Feb/2018:16:24:19 +0800]"GET /HTTP/1.1" 403 5039   # this line is typed in manually
      "@version" => "1",
     "timestamp" => "07/Feb/2018:16:24:19 +0800",
    "@timestamp" => 2019-01-22T10:16:14.205Z,
       "message" => "172.16.213.132 [07/Feb/2018:16:24:19 +0800]\"GET /HTTP/1.1\" 403 5039",
            "ip" => "172.16.213.132",
          "host" => "ip-172-31-22-29.ec2.internal"

3) Extract the quoted request string

filter {
        grok {
                match => {"message" => "\ %{QS:referrer}\ "}
        }
}

172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039
    "@timestamp" => 2019-01-22T10:47:37.127Z,
       "message" => "172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] \"GET /HTTP/1.1\" 403 5039",
      "@version" => "1",
          "host" => "ip-172-31-22-29.ec2.internal",
      "referrer" => "\"GET /HTTP/1.1\""

1.2 The date plugin

In the example above we extracted a timestamp field, which is the time recorded inside the log line. But the output shows your timestamp field and also an @timestamp field, and the two are different: @timestamp holds the current system time. This matters because in an ELK pipeline Elasticsearch uses the @timestamp field to mark when the log was produced, so leaving it at ingestion time scrambles the event times. To fix this we need another plugin, the date plugin, which converts the time string in the log record into a Logstash::Timestamp object and then stores it in the @timestamp field.

filter {
        grok {
                match => {"message" => "\ -\ -\ \[%{HTTPDATE:timestamp}\]"}
        }
        date {
                match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
        }
}

Note: the time-zone offset is converted with the letter Z. Also, in "dd/MMM/yyyy" there really are three capital M's in the middle; when I tried writing only two, the conversion failed.
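To make the format concrete, here is how each token in that match string lines up against the sample timestamp (an illustrative breakdown, not Logstash output):

# "07/Feb/2018:16:24:19 +0800" against "dd/MMM/yyyy:HH:mm:ss Z"
#   dd        -> 07         two-digit day of month
#   MMM       -> Feb        abbreviated month name (hence the three capital M's)
#   yyyy      -> 2018       four-digit year
#   HH:mm:ss  -> 16:24:19   hour, minute, second
#   Z         -> +0800      time-zone offset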

172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039  # this line is typed in manually
          "host" => "ip-172-31-22-29.ec2.internal",
     "timestamp" => "07/Feb/2018:16:24:19 +0800",
    "@timestamp" => 2018-02-07T08:24:19.000Z,
       "message" => "172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] \"GET /HTTP/1.1\" 403 5039",
      "@version" => "1"

1.3 Modifying data with the mutate plugin

mutate is another very important Logstash plugin. It offers rich processing of basic data types, including renaming, deleting, replacing, and modifying fields in a log event. Here are some commonly used mutate options: type conversion with convert, regex substitution with gsub, splitting a string into an array with split, renaming a field with rename, and removing a field with remove_field.
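Each of these options is demonstrated on its own below. As a quick preview, several of them can also sit together in a single mutate block; this is just a sketch with hypothetical field names:

filter {
        mutate {
                convert      => { "status" => "integer" }    # change a field's data type
                rename       => { "ip" => "client_ip" }      # give a field a new name
                gsub         => ["referrer", "/", "-"]       # regex-replace inside a string field
                remove_field => ["message"]                  # drop a field entirely
        }
}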
1) Using remove_field
remove_field is used very often. Its job is to remove redundant fields: as you saw in the earlier examples, whatever we extract shows up twice, once inside message and once in the HTTPDATE or IP field. Since the whole point of filtering is to keep only the useful information, let's see how to drop the duplicate.
Again taking the IP output as an example:

filter {
        grok {
                match => {"message" => "%{IP:ip_address}"}
                remove_field => ["message"]
        }
}

172.16.213.132 - - [07/Feb/2018:16:24:19 +0800] "GET /HTTP/1.1" 403 5039   # type this line in and press Enter
    "ip_address" => "172.16.213.132",
          "host" => "ip-172-31-22-29.ec2.internal",
      "@version" => "1",
    "@timestamp" => 2019-01-22T12:16:58.918Z

Now the message line is gone from the output, because remove_field removed it. The benefit is obvious: we keep only the specific pieces of the log we actually need.

2) Type conversion with convert

filter {
        grok {
                match => {"message" => "%{IPV4:ip}"}
                remove_field => ["message"]
        }
        mutate {
                convert => ["ip","string"]
        }
}

172.16.213.132 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039
    "@timestamp" => 2019-01-23T04:13:55.261Z,
            "ip" => "172.16.213.132",
          "host" => "ip-172-31-22-29.ec2.internal",
      "@version" => "1"

3) Regex substitution on matched fields
gsub replaces the values matched by a regular expression inside a field, but it only works on string fields.

filter {
        grok {
                match => {"message" => "%{QS:referrer}"}
                remove_field => ["message"]
        }
        mutate {
                gsub => ["referrer","/","-"]
        }
}

          "host" => "ip-172-31-22-29.ec2.internal",
    "@timestamp" => 2019-01-23T05:51:30.786Z,
      "@version" => "1",
      "referrer" => "\"GET -HTTP-1.1\""

4) Splitting a string into an array
split splits the string in a field into an array using the given separator.

filter {
        mutate {
                split => ["message","-"]
                add_field => ["A is lower case :","%{[message][0]}"]
        }
}

a-b-c-d-e-f-g            # type this line in and press Enter
        [0] "a",
        [1] "b",
        [2] "c",
        [3] "d",
        [4] "e",
        [5] "f",
        [6] "g"
          "host" => "ip-172-31-22-29.ec2.internal",
      "@version" => "1",
    "@timestamp" => 2019-01-23T06:07:18.062Z

5) Renaming a field
rename renames a field.

filter {
        grok {
                match => {"message" => "%{IPV4:ip}"}
                remove_field => ["message"]
        }
        mutate {
                convert => {
                        "ip" => "string"
                }
                rename => {
                        "ip" => "IP"
                }
        }
}

The rename option above is written with curly braces {}; square brackets achieve the same result:

mutate {
        convert => {
                "ip" => "string"
        }
        rename => ["ip","IP"]
}

172.16.213.132 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039      # type this line in manually
      "@version" => "1",
    "@timestamp" => 2019-01-23T06:20:21.423Z,
          "host" => "ip-172-31-22-29.ec2.internal",
            "IP" => "172.16.213.132"

6) Adding a field with add_field

add_field is mostly used together with split, mainly to output the pieces produced by split in a chosen format.

filter {
        mutate {
                split => ["message", "|"]
                add_field => {
                        "timestamp" => "%{[message][0]}"
                }
        }
}

Once added, the field shows up in the output at the same level as @timestamp.
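As a rough sketch of what to expect (the input line and values below are made up for illustration rather than captured output), an event would gain a timestamp field holding the first segment of the split message:

a|b|c                                        # hypothetical input line
#     "message" => ["a", "b", "c"]           # message split on "|"
#   "timestamp" => "a"                       # add_field copied element [0]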

1.4 geoip address lookup

geoip is a common, free IP-address lookup library. Given an IP address, the geoip plugin returns the corresponding geographic information: country, region/city, latitude and longitude, and so on. It is very useful for map visualizations and per-region statistics.

filter {
        grok {
                match => {
                        "message" => "%{IP:ip}"
                }
                remove_field => ["message"]
        }
        geoip {
                source => "ip"
        }
}

114.55.68.111 - - [07/Feb/2018:16:24:9 +0800] "GET /HTTP/1.1" 403 5039      # type this line in manually
            "ip" => "114.55.68.111",
         "geoip" => {
             "city_name" => "Hangzhou",
           "region_code" => "33",
              "location" => {
            "lat" => 30.2936,
            "lon" => 120.1614
        },
             "longitude" => 120.1614,
              "latitude" => 30.2936,
         "country_code2" => "CN",
              "timezone" => "Asia/Shanghai",
                    "ip" => "114.55.68.111",
         "country_code3" => "CN",
        "continent_code" => "AS",
          "country_name" => "China",
           "region_name" => "Zhejiang"
    },
          "host" => "ip-172-31-22-29.ec2.internal",
      "@version" => "1",
    "@timestamp" => 2019-01-23T06:47:51.200Z
    

But not every item above is something we want, so we can choose what to output.

Modify the configuration as follows:

filter {
        grok {
                match => ["message","%{IP:ip}"]
                remove_field => ["message"]
        }
        geoip {
                source => ["ip"]
                target => ["geoip"]
                fields => ["city_name","region_name","country_name","ip"]
        }
}
    
1.5 Putting the filter plugins together

Our real-world example log line looks like this:

112.195.209.90 - - [20/Feb/2018:12:12:14 +0800] "GET / HTTP/1.1" 200 190 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36" "-"
    

Double quotes, single quotes, square brackets and other characters that the regex cannot parse literally must be escaped with a backslash in the pattern; for details see https://www.cnblogs.com/ysk123/p/9858387.html

Now we modify the configuration file to match this line:

filter {
        grok {
                match => ["message","%{IPORHOST:client_ip}\ -\ -\ \[%{HTTPDATE:timestamp}\]\ %{QS:referrer}\ %{NUMBER:status}\ %{NUMBER:bytes}\ \"-\"\ \"%{DATA:browser_info}\ %{GREEDYDATA:extra_info}\"\ \"-\""]
        }
        geoip {
                source => ["client_ip"]
                target => ["geoip"]
                fields => ["city_name","region_name","country_name","ip"]
        }
        date {
                match => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
        }
        mutate {
                remove_field => ["message","timestamp"]
        }
}
    

Then start Logstash and take a look at the result:

    "referrer" => "\"GET / HTTP/1.1\"", "bytes" => "190", "client_ip" => "112.195.209.90", "@timestamp" => 2018-02-20T04:12:14.000Z, "browser_info" => "Mozilla/5.0", "extra_info" => "(Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36", "status" => "200", "host" => "ip-172-31-22-29.ec2.internal", "@version" => "1", "geoip" => { "city_name" => "Chengdu", "region_name" => "Sichuan", "country_name" => "China", "ip" => "112.195.209.90"