
Logstash configuration update - filters

Reviewing how we deal with filters in Logstash for our (mostly) Windows environment... inputs were already discussed, and outputs will be discussed in another post.  I'll be sanitizing the info as best I can.

Filters

So our config file structure looks like this for filters:
  • 01-filters_start.conf (how I keep the brackets organized)
  • 02-filters_drop.conf (drop stuff first)
  • 10-filters_grok.conf (fixing grokparsefailures to start)
  • 15-filters_mutate.conf
  • 20-filters_tagging.conf
  • 30-filters_metrics.conf
  • 89-filters_end.conf (bracket organization)
Logstash concatenates every file in the config directory in lexical order, so the numeric prefixes control the order the filters run in - and let the first and last files hold the opening and closing braces.  Let's go through them...

01-filters_start

filter {
Yep, that's it.  Just organization.

02-filters_drop

# drop esx syslog because vmware bugs
#if [message] =~ /esx1/ {
#       drop { }
#}
This was used because the ESX cluster kept dumping syslog messages long after I disabled sending, and it gave me a chance to see what a drop filter looks like.  Left in because maybe someone will find it handy.
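
Incidentally, if you want to thin out noisy traffic rather than silence it completely, recent versions of the drop filter also support a percentage option.  A minimal sketch, reusing the same (sanitized) host match from above:

if [message] =~ /esx1/ {
        drop { percentage => 90 }       # drop roughly 90% of matching events, keep a sample
}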

10-filters_grok

# The purpose of this filter is to remove our grokparsefailures on the Netscaler logs
if [type] == "syslog" {
        grok {
                break_on_match => false
                match => [
                        "message", "%{NETSCALER1}",
                        "message", "%{NETSCALER2}",
                        "message", "%{NETSCALER3}"
                ]
                add_tag => [ "ns_tag" ]
                remove_tag => [ "_grokparsefailure" ]
                tag_on_failure => []
        }
}
This one is interesting... we get a lot of firewall & load balancer traffic, and while those events were still usable, they weren't easy to work with, because everything had to be grep'd out of the raw message field.  One of my co-workers wrote some custom grok patterns to parse out the Netscaler messages - I'll see if he's okay with publishing them.

These were placed into the /opt/logstash/patterns directory, and you just reference them like above.  Pretty cool to see it in action!
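
Until those get published, here's a hypothetical sketch of the pattern file format - each line is a pattern name followed by its regex, saved as e.g. /opt/logstash/patterns/netscaler (these field names and regexes are made up, not the real patterns):

# /opt/logstash/patterns/netscaler -- illustrative only
NETSCALER1 %{SYSLOGTIMESTAMP:ns_time} %{HOSTNAME:ns_host} %{GREEDYDATA:ns_event}
NETSCALER2 %{IP:ns_client_ip}:%{INT:ns_client_port} -> %{IP:ns_dest_ip}:%{INT:ns_dest_port}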

15-filters_mutate

# Mutate filters go in here.
# Clean up the IIS log fields, add a new type
if [SourceName] == "IIS" {
        mutate {
                replace => [ "type", "IISlog", "_type", "IISlog" ]
                rename => [ "s-computername", "hostname" ]
                remove_field => [ "Message" ]
        }
}
# Strip the domain suffix from 'Hostname'
if [type] == "eventlog" {
        mutate {
                gsub => [
                        "Hostname", "\.domain\.com$", "",
                        "Hostname", "\.domain\.local$", ""
                ]
        }
}
Here we wanted to standardize fields (still some work to do on this) so that working with IIS logs and Event Logs would be a little more streamlined.  The regexes in the eventlog section strip the domain suffix, leaving us with just the short hostname.
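
A quick before/after of what the gsub does to an event (hypothetical host):

# before: "Hostname" => "TOR-APP-01.domain.com"
# after:  "Hostname" => "TOR-APP-01"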

20-filters_tagging

I have to heavily redact this section, so use your imagination.

Event Log tagging for Nagios

# All tagging filters go in here
# Don't forget to set the 'tag_on_failure' or else _grokparsefailures will abound!
#
# BizTalk Error Tagging & Alerting #
#
if [type] == "eventlog" {
        if [SourceName] == "BizTalk Server" {
                grok { # SeverityValue 4 = error -> Nagios CRITICAL
                        match => [ "SeverityValue", "[4]" ]
                        add_field => {
                                "nagios_service" => "BizTalk-ServerError"
                                "nagios_status" => "2"
                        }
                        add_tag => [ "nagios_check_eventlog_biztalk" ]
                        tag_on_failure => []
                }
                grok { # SeverityValue 3 = warning -> Nagios WARNING
                        match => [ "SeverityValue", "[3]" ]
                        add_field => {
                                "nagios_service" => "BizTalk-ServerWarning"
                                "nagios_status" => "1"
                        }
                        add_tag => [ "nagios_check_eventlog_biztalk" ]
                        tag_on_failure => []
                }
        } # end SourceName == "BizTalk Server"
} # end type == "eventlog"
What we're doing here is taking any BizTalk Server event matching severity 3 or 4 (warn/err) and adding the appropriate (custom) Nagios fields and tag - more on these in the output discussion.  The added fields hold the Nagios service name and the Nagios status code.
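
For a severity-4 event, the result looks something like this (illustrative field dump, not actual output):

        "nagios_service" => "BizTalk-ServerError"
        "nagios_status"  => "2"
        "tags"           => [ ..., "nagios_check_eventlog_biztalk" ]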

You'll see 'tag_on_failure => []' everywhere - grok's default behaviour on a failed match is to add '_grokparsefailure' to the tags, and since these groks are used purely as taggers (most events won't match any given one), omitting the override would mark nearly every event as a parse failure.

IIS log tagging for Nagios

# For IIS events: Nagios service & status field addition
if [type] == "IISlog" {
        if [cs-host] =~ /^REGEX TO MATCH SOME HOSTNAMES$/ {
        # This block deals with cs-host being an IP, a load balancer, or something else, so we match on cs-uri-stem instead
                grok { # Add the blah
                        match => [ "cs-uri-stem", "^/blah/" ]
                        add_field => [ "nagios_service", "Blah" ]
                        tag_on_failure => []
                }
<SNIPPITY SNIP SNIP - a lot of these to match specific instances>
        } else {
        # This block deals with cs-host being an FQDN
                grok { # Add the SuperSite Nagios service field/value
                        match => [ "cs-host", "^(uat)?supersite.domain.com$" ]
                        add_field => [ "nagios_service", "ResponseDriver" ]
                        tag_on_failure => []
                }
                <SNIPPITY SNIP SNIP - a lot of these to match specific instances>
        }
        # Here we will match IISlog entry "sc-status" codes
        # and assign Nagios service values accordingly
        grok { # 200 or 300 codes give us OK
                match => [ "sc-status", "[23]\d\d" ]
                add_field => [ "nagios_status", "0" ]
                add_tag => [ "nagios_check_iis" ]
                tag_on_failure => []
        }
        grok { # 400 codes give us WARN
                match => [ "sc-status", "[4]\d\d" ]
                add_field => [ "nagios_status", "1" ]
                add_tag => [ "nagios_check_iis" ]
                tag_on_failure => []
        }
        grok { # 500 codes give us CRIT
                match => [ "sc-status", "[5]\d\d" ]
                add_field => [ "nagios_status", "2" ]
                add_tag => [ "nagios_check_iis" ]
                tag_on_failure => []
        }
} # endif type=iislog
A lot to look at here!!  But it's pretty much the same thing as the Event Log Nagios tagging... EXCEPT!!  One of my co-workers is too smart for his own good, and came up with this really sweet event flow into Nagios monitors...

  1. IIS logs are tagged based on status code
  2. 200/300 codes = Nagios OK
  3. 400 codes = Nagios WARN
  4. 500 codes = Nagios CRIT
Since the logs are continuously flowing, the service status is continually updated.  This means you need to review the event history and rely on notifications, but we mostly use this as 'additional information'.  Eventually it could drive actual alerting, I suppose.


30-filters_metrics

# Special metrics filters in here
Yeeaah.  Placeholder.  This will probably come up soon, but I'm not sure yet.
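
When it does, something like the stock metrics filter is the likely candidate - a minimal sketch (the meter name is made up), counting events per type and emitting a rate event every 60 seconds:

metrics {
        meter => [ "events_%{type}" ]
        add_tag => [ "metric" ]
        flush_interval => 60
}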


89-filters_end

} #filter
Yep.  Just organization.
