Logstash configuration update - filters

A review of how we deal with filters for our (mostly) Windows environment in Logstash. Inputs were already discussed, and outputs will be covered in another post.  I'll be sanitizing the info as best I can.

Filters

So our config file structure looks like this for filters:
  • 01-filters_start.conf (how I keep the brackets organized)
  • 02-filters_drop.conf (drop stuff first)
  • 10-filters_grok.conf (fixing grokparsefailures to start)
  • 15-filters_mutate.conf
  • 20-filters_tagging.conf
  • 30-filters_metrics.conf
  • 89-filters_end.conf (bracket organization)
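Because Logstash reads the config directory in lexical order and concatenates the files, the numeric prefixes control where each piece lands. The net effect (skeleton only - the real contents are in the sections below) is one big block:

filter {                        # from 01-filters_start.conf
        # drops, groks, mutates, tagging, metrics...
}                               # from 89-filters_end.conf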
Let's go through them...

01-filters_start

filter {
Yep, that's it.  Just organization.

02-filters_drop

# drop esx syslog because vmware bugs
#if [message] =~ /esx1/ {
#       drop { }
#}
This was used because the ESX cluster kept dumping syslog messages long after I disabled sending (for reasons I never tracked down), and it gave me a chance to see what a drop filter looks like.  I've left it in, commented out, in case someone finds it handy.

10-filters_grok

# The purpose of this filter is to remove our grokparsefailures on the Netscaler logs
if [type] == "syslog" {
        grok {
                break_on_match => false
                match => [
                        "message", "%{NETSCALER1}",
                        "message", "%{NETSCALER2}",
                        "message", "%{NETSCALER3}"
                ]
                add_tag => [ "ns_tag" ]
                remove_tag => [ "_grokparsefailure" ]
                tag_on_failure => []
        }
}
This one is interesting...we get a lot of firewall & load balancer traffic, and while those logs are still usable, they're not easy to work with because everything has to be grep'd out of the raw message field.  One of my co-workers wrote some custom grok patterns to parse the Netscaler messages into proper fields - I'll see if he's okay with publishing them.

These were placed into the /opt/logstash/patterns directory, and you just reference them like above.  Pretty cool to see it in action!
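For reference, a pattern file is just one named pattern per line - NAME, a space, then the expression, which can build on the stock grok patterns. The entry below is a made-up placeholder, not the actual Netscaler pattern:

# /opt/logstash/patterns/netscaler (hypothetical example)
NETSCALER1 %{SYSLOGTIMESTAMP:ns_timestamp} %{HOSTNAME:ns_host} %{GREEDYDATA:ns_message}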

15-filters_mutate

# Mutate filters go in here.
# Clean up the IIS log fields, add a new type
if [SourceName] == "IIS" {
        mutate {
                replace => [ "type", "IISlog", "_type", "IISlog" ]
                rename => [ "s-computername", "hostname" ]
                remove_field => [ "Message" ]
        }
}
# Strip out the domain name from 'Hostname'
if [type] == "eventlog" {
        mutate {
                gsub => [
                        "Hostname", ".domain.com$", "",
                        "Hostname", ".domain.local$", ""
                ]
        }
}
Here we wanted to standardize fields (still some work to do on this) so that working with IIS logs and Event Logs would be a little more streamlined.  The regexes in the eventlog section strip off the domain suffix, leaving us with just the short hostname.
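For illustration, with a hypothetical event the gsub does this:

# Before:  "Hostname" => "APPSERVER01.domain.com"
# After:   "Hostname" => "APPSERVER01"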

20-filters_tagging

I have to heavily redact this section, so use your imagination.

Event Log tagging for Nagios

# All tagging filters go in here
# Don't forget to set the 'tag_on_failure' or else _grokparsefailures will abound!
#
# BizTalk Error Tagging & Alerting #
#
if [type] == "eventlog" {
        if [SourceName] == "BizTalk Server" {
                        grok {
                                match => ["SeverityValue", "[4]"]
                                add_field => {
                                                "nagios_service" => "BizTalk-ServerError"
                                                "nagios_status" => "2"
                                }
                                add_tag => [ "nagios_check_eventlog_biztalk" ]
                                tag_on_failure => []
                        }
                        grok {
                                match => ["SeverityValue", "[3]"]
                                add_field => {
                                                "nagios_service" => "BizTalk-ServerWarning"
                                                "nagios_status" => "1"
                                }
                                add_tag => [ "nagios_check_eventlog_biztalk" ]
                                tag_on_failure => []
                        }
        } # End sourceName=BiztalkServer
} # end type=eventlog
What we're doing here is taking any BizTalk Server event with severity 3 or 4 (warning/error) and adding the appropriate (custom) Nagios tags - more on these in the output discussion.  The tags consist of the Nagios service name and the Nagios status code.

You'll see 'tag_on_failure => []' everywhere - at the time I never did track down why omitting it caused everything to grokparsefailure.  The reason is that grok tags every non-matching event with _grokparsefailure by default, and since these groks are only being used as conditional taggers, most events won't match them.
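For reference, leaving the option out entirely is equivalent to grok's default, which is what floods the events with failure tags:

grok {
        match => [ "SeverityValue", "[4]" ]
        # grok's default when tag_on_failure isn't specified:
        tag_on_failure => [ "_grokparsefailure" ]
}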

IIS log tagging for Nagios

# For IIS events: Nagios service & status field addition
if [type] == "IISlog" {
        if [cs-host] =~ /^REGEX TO MATCH SOME HOSTNAMES$/ {
        # This block deals with cs-host being an IP, load balancer address, or something else non-descriptive, i.e. we have to match on the cs-uri-stem instead
                grok { # Add the blah
                        match => [ "cs-uri-stem", "^/blah/" ]
                        add_field => [ "nagios_service", "Blah" ]
                        tag_on_failure => []
                }
<SNIPPITY SNIP SNIP - a lot of these to match specific instances>
        } else {
        # This block deals with cs-host being an FQDN
                grok { # Add the SuperSite Nagios service field/value
                        match => [ "cs-host", "^(uat)?supersite.domain.com$" ]
                        add_field => [ "nagios_service", "ResponseDriver" ]
                        tag_on_failure => []
                }
                <SNIPPITY SNIP SNIP - a lot of these to match specific instances>
        }
        # Here we will match IISlog entry "sc-status" codes
        # and assign Nagios service values accordingly
        grok { # 200 or 300 codes give us OK
                match => [ "sc-status", "[2,3]\d\d" ]
                add_field => [ "nagios_status", "0" ]
                add_tag => [ "nagios_check_iis" ]
                tag_on_failure => []
        }
        grok { # 400 codes give us WARN
                match => [ "sc-status", "[4]\d\d" ]
                add_field => [ "nagios_status", "1" ]
                add_tag => [ "nagios_check_iis" ]
                tag_on_failure => []
        }
        grok { # 500 codes give us CRIT
                match => [ "sc-status", "[5]\d\d" ]
                add_field => [ "nagios_status", "2" ]
                add_tag => [ "nagios_check_iis" ]
                tag_on_failure => []
        }
} # endif type=iislog
A lot to look at here!!  But it's pretty much the same thing as the Event Log Nagios tagging...EXCEPT!!  One of my co-workers is too smart for his own good and came up with this really sweet event flow into the Nagios monitors...

  1. IIS logs are tagged based on status code
  2. 200/300 codes = Nagios OK
  3. 400 codes = Nagios WARN
  4. 500 codes = Nagios CRIT
Since the logs are continuously flowing, the service status is continually updated.  That means you need to review the event history and rely on notifications, but we mostly use this as 'additional information'.  It could eventually drive alerting, I suppose.
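To make the flow concrete, here's an abridged, hypothetical view (rubydebug-style) of what a 503 hit on the supersite FQDN would carry into the output stage after the filters above:

{
             "type" => "IISlog",
          "cs-host" => "supersite.domain.com",
        "sc-status" => "503",
   "nagios_service" => "ResponseDriver",
    "nagios_status" => "2",
             "tags" => [ "nagios_check_iis" ]
}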


30-filters_metrics

# Special metrics filters in here
Yeeaah.  This one is a placeholder.  It will probably get filled in soon, but I'm not sure with what yet.
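If and when it does, a minimal sketch using the stock 'metrics' filter plugin might look something like this (hypothetical, not in use here):

# Meter event rates per type and emit a "metric" event every 60 seconds
metrics {
        meter => [ "events_%{type}" ]
        add_tag => [ "metric" ]
        flush_interval => 60
}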


89-filters_end

} #filter
Yep.  Just organization.
