
Our (contextual) approach to monitoring...

Let me preface this with a disclaimer: I am not the creator/inventor of any of these ideas; this is just a combination of Google and the context of our environment.  For all I know people have been doing this for ages...it's new to me!

Our context
Our 'prod' environment is in a managed hosting context for various reasons (compliance, past decisions, etc).  This means that from a 'traditional' monitoring point of view we are covered.  Anything beyond that we're on our own.  Also, we are 95% Windows.

What is monitored today: host/svc up-down checks, ports alive, HTTP GET = XYZ, is CPU crazy, is RAM maxed, etc...
Picking up what's left: Logs! Centrally aggregate all logs and query that pile of data to get alerts - i.e. the services/applications we run (written in-house) dump stuff to the event log, among other places.

What are our goals?

  1. Get visibility into what our svcs/apps are doing - know (be alerted) that svcX is dumping errors right now, not a week from now when something's gone wrong and someone finally looks at the event log
  2. Reduce the time required to troubleshoot things like firewall/load balancer logs (right now a lot of grep)
  3. Know what's going on inside the svcs/apps so we have insight into svc/app health, what our customers are doing/not doing, etc
  4. Give EVERYONE in the company this kind of visibility through 'public' dashboards

Note that all of these goals line up with 'bettering the customer experience' & 'improving our flow'.

What do we have to work with?

  • Windows, SQL, IIS, Windows services, BizTalk
  • Linux - a few Apache proxies, other Linux applications
  • Load balancers & firewalls
  • RSyslog
  • OMD/Check_MK (Nagios)

The tools
So what we are going to try and build is this:

  • NXlog (running on each server) dumps event logs to central syslog
  • Firewalls/load balancers/etc already dump to syslog
  • Central syslog forwards to ELK stack (Elasticsearch, Logstash, Kibana) and does 'permanent' archiving
  • Failure-tolerant & performant ELK stack stores X amount of logs
  • Elasticsearch queries -> Nagios for alerting (PagerDuty gets thrown around a lot, have to investigate) - see the sketch below this list
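
To make that last bullet a bit more concrete, here's a minimal sketch of what an 'ES query -> Nagios' check could look like - a small Python script that Nagios/Check_MK could run as an active check.  The host name, index pattern, 'EventType' field and thresholds are all assumptions on my part, not something we've built:

#!/usr/bin/env python
# Hypothetical Nagios-style check: count recent ERROR events in Elasticsearch.
# Assumed: ES reachable at es.local:9200, Logstash writing 'logstash-*' indices,
# an 'EventType' field coming from NXlog, warn/crit thresholds of 1/10 hits.
import json
import sys
import urllib.request

ES_URL = "http://es.local:9200/logstash-*/_count"    # assumed host/index pattern
QUERY = {"query": {"bool": {"must": [
    {"match": {"EventType": "ERROR"}},               # assumed field name
    {"range": {"@timestamp": {"gte": "now-5m"}}},    # only the last 5 minutes
]}}}
WARN, CRIT = 1, 10                                   # assumed thresholds

def main():
    req = urllib.request.Request(
        ES_URL, json.dumps(QUERY).encode("utf-8"),
        {"Content-Type": "application/json"})
    try:
        resp = urllib.request.urlopen(req, timeout=10)
        count = json.loads(resp.read().decode("utf-8"))["count"]
    except Exception as exc:                          # network/parse problems
        print("UNKNOWN - query failed: %s" % exc)
        sys.exit(3)
    if count >= CRIT:
        print("CRITICAL - %d error events in the last 5m" % count)
        sys.exit(2)
    if count >= WARN:
        print("WARNING - %d error events in the last 5m" % count)
        sys.exit(1)
    print("OK - no recent error events")
    sys.exit(0)

if __name__ == "__main__":
    main()

The same search could just as easily feed a passive check via NSCA instead of an active one; the point is that 'alert when Elasticsearch returns N hits' is a small script, not a product.
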
How does this help?
Well first off, we have pretty much nothing in this department today, so anything is better than nothing.  There are obvious caveats to that rule, but really, 'overdue' is a word that would not be totally out of place for us.

Because we've relied on our hosting provider for so much, we've become somewhat complacent; it's always good to be on your toes for this kind of thing.

One last point...we are pretty budget-constrained, so 'real' insight tools are out of reach.  We have the competence/expertise to pull this off, even if nobody here has actually done this before.

Is this the right way to go about it?
From what I have read...it's A way to go about it.  Is it THE way?  I don't think there is a rubber stamp answer to this, but the ELK stack is rapidly becoming a go-to tool.  Nagios/Zabbix and the like are probably not going away, and syslog is a standard we can all live with.  NXlog is lightweight, so acceptable.  Windows eventlog is Windows eventlog.

Ideally we'd start looking at WMI queries and whatnot, but as the 'out of the box' path of least resistance, this gives us the logging visibility we need.  Alerting on ES queries...others are doing this, so at least we know it's not science fiction.

Future plans
The future goal is to re-code our services/applications to dump metrics to something like Bucky/StatsD, then on to Graphite, but this is a fairly large task.  WMI queries hopefully will be the path we take to get Windows performance stats into Graphite.
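
For what it's worth, here's roughly the shape of the WMI -> Graphite idea - a minimal Python sketch assuming the 'wmi' package on the Windows host and a Carbon listener on the standard plaintext port (2003).  The host names and metric paths are made up:

#!/usr/bin/env python
# Rough sketch of the WMI -> Graphite idea (not something we run today).
# Assumed: 'pip install wmi' on the Windows host, Carbon at graphite.local:2003.
import socket
import time
import wmi  # Windows only

GRAPHITE = ("graphite.local", 2003)                  # assumed Carbon host/port
PREFIX = "servers.%s" % socket.gethostname().lower()

def collect():
    # Grab a couple of formatted perf counters via WMI.
    c = wmi.WMI()
    now = int(time.time())
    lines = []
    for cpu in c.Win32_PerfFormattedData_PerfOS_Processor(Name="_Total"):
        lines.append("%s.cpu.percent %s %d" % (PREFIX, cpu.PercentProcessorTime, now))
    for mem in c.Win32_PerfFormattedData_PerfOS_Memory():
        lines.append("%s.memory.available_mb %s %d" % (PREFIX, mem.AvailableMBytes, now))
    return lines

def send(lines):
    # Graphite's plaintext protocol is just '<path> <value> <timestamp>\n'.
    sock = socket.create_connection(GRAPHITE, timeout=5)
    sock.sendall(("\n".join(lines) + "\n").encode("ascii"))
    sock.close()

if __name__ == "__main__":
    send(collect())

Run from Task Scheduler every minute or so, something like this would be enough to start trending CPU/RAM in Graphite while the bigger Bucky/StatsD refactor waits.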

If you feel there is a better way, or would like to discuss, comment below.


Popular posts from this blog

DFSR - eventid 4312 - replication just won't work

This warning isn't documented that well on the googles, so here's some google fodder:


  • You are trying to set up replication for a DFS folder (no existing replication)
  • Source server is 2008R2, 'branch office' server is 2012R2 (I'm moving all our infra to 2012R2)
  • You have no issues getting replication configured
  • You see the DFSR folders get created on the other end, but nothing stages

Finally you get EventID 4312:
The DFS Replication service failed to get folder information when walking the file system on a journal wrap or loss recovery due to repeated sharing violations encountered on a folder. The service cannot replicate the folder and files in that folder until the sharing violation is resolved.
Additional Information:
Folder: F:\Users$\user.name\Desktop\Random Folder Name\
Replicated Folder Root: F:\Users$
File ID: {00000000-0000-0000-0000-000000000000}-v0
Replicated Folder Name: Users
Replicated Folder ID: 33F0449D-5E67-4DA1-99AC-681B5BACC7E5
Replication Group…

Fixing duplicate SPNs (service principal name)

This is a pretty handy thing to know:

SPNs are used when a specific service/daemon uses Kerberos to authenticate against AD. They map a specific service, port, and object together with this convention: class/host:port/name

If you use a computer object to auth (such as local service):
MSSQLSVC/tor-sql-01.domain.local:1433

If you use a user object to auth (such as a service account, or admin account):
MSSQLSVC/username:1433

Why do we care about duplicate SPNs? If you have two entries trying to auth using the same Kerberos ticket (I think that's right...), they will conflict, and cause errors and service failures.

To check for duplicate SPNs, run "setspn.exe -X":

C:\Windows\system32>setspn -X
Processing entry 7
MSSQLSvc/server1.company.local:1433 is registered on these accounts:
CN=SERVER1,OU=servers,OU=resources,DC=company,DC=local
CN=SQL Admin,OU=service accounts,OU=resources,DC=company,DC=local

found 1 groups of duplicate SPNs. (truncated/sanitized)

Note that y…

Logstash to Nagios - alerting based on Windows Event ID

This took way longer than it should have to get going...so here's a config and brain dump...

Why?
You want to have a central place to analyze Windows Event/IIS/local application logs, alert off specific events, alert off specific situations.  You don't have the budget for a boxed solution.  You want pretty graphs.  You don't particularly care about individual server states.  (see rationale below - although you certainly have all the tools here to care, I haven't provided that configuration)

How?
ELK stack, OMD, NXlog agent, and Rsyslog.  The premise here is as follows:

  1. Event generated on server into EventLog
  2. NXlog ships to Logstash input
  3. Logstash filter adds fields and tags to specified events
  4. Logstash output sends to a passive Nagios service via the Nagios NSCA output
  5. The passive service on Nagios (Check_MK c/o OMD) does its thing w. alerting
OMD
Open Monitoring Distribution, but the real point here is Check_MK (IIRC Icinga uses this...).  It makes Nagios easy to use and main…