Showing posts from January, 2015

DFSR - eventid 4312 - replication just won't work

This warning isn't documented that well on the googles, so here's some google fodder:

- You are trying to set up replication for a DFS folder (no existing replication)
- Source server is 2008R2; the 'branch office' server is 2012R2 (I'm moving all our infra to 2012R2)
- You have no issues getting replication configured
- You see the DFSR folders get created on the other end, but nothing stages
- Finally you get EventID 4312:

The DFS Replication service failed to get folder information when walking the file system on a journal wrap or loss recovery due to repeated sharing violations encountered on a folder. The service cannot replicate the folder and files in that folder until the sharing violation is resolved.

Additional Information:
Folder: F:\Users$\\Desktop\Random Folder Name\
Replicated Folder Root: F:\Users$
File ID: {00000000-0000-0000-0000-000000000000}-v0
Replicated Folder Name: Users
Replicated Folder ID: 33F0449D-5E67-4DA1-99A
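Not from the original post, but for anyone landing here mid-troubleshoot, these are the standard commands for confirming the replicated folder state and hunting down the sharing violation. wmic and dfsrdiag ship with the DFSR role; handle.exe is Sysinternals, and the folder name is just the one from the event - swap in your own:

```bat
REM Check replicated folder state (0=Uninitialized, 2=Initial Sync, 4=Normal, 5=In Error)
wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicatedfoldername,state

REM Force AD polling after any config change
dfsrdiag pollad

REM Find what's holding the problem folder open (Sysinternals handle.exe)
handle.exe "Random Folder Name"
```

In my case-style scenarios the culprit is usually an open handle (antivirus, backup agent, or a user session) on the exact folder named in the 4312 event.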

Add an RDM, get an error! (EqualLogic)

Google fodder. I was able to reproduce as follows:

- Take a VM that's already working and shut it down (in my case, 2012 R2).
- Add an RDM disk (using a new controller or not; both gave the error).
- Try to power up the VM.

You will get an error: 38 (Function not implemented). It goes on to talk about 'cannot open virtual disk or one of the snapshot disks it depends on', even though there are no snapshots. Thankfully, even though the Google hits were few, I figured it out.

The issue? This is for moving our physical Veeam backup server into a VM. For the B2D target I wanted to use an RDM that is on a different SAN than the VMs. When I created the volume in the EqualLogic group manager I had the option of 512-byte sectors or 4k sectors. Figured, hey, 2012R2 can deal with 4k! Maybe it can, but ESXi 5.5.0 (update 2b?) sure didn't. I nuked the volume, recreated it with 512-byte sectors, and it works no problem. FWIW, the array is running the latest firmware (at time of writing).
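Not in the post, but if you want to see what sector sizes a guest actually ended up with, fsutil on 2012 R2 reports both values (drive letter is a placeholder):

```bat
REM Logical vs. physical sector size as seen by the guest
fsutil fsinfo ntfsinfo D:
REM "Bytes Per Sector" = logical; "Bytes Per Physical Sector" = physical.
REM 512/512 is a classic volume, 512/4096 is 512e (advanced format),
REM 4096/4096 is native 4K - the case ESXi 5.5 choked on here.
```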

Logstash configuration update - outputs and concluding note

Config files like this:

90-outputs_start.conf
91-outputs_elasticsearch.conf
92-outputs_nagios.conf
93-outputs_graphite.conf
99-outputs_end.conf

Outputs

90-outputs_start

output {

Orrrrrganization.

91-outputs_elasticsearch

# Old embedded ES
#elasticsearch {
#       host => ""
#       #cluster => "elasticsearch"
#       }

# New ES cluster
elasticsearch {
        host => ""
        cluster => "site.elk.elasticsearch"
        }

Just a simple ES output - note that I left in the old embedded config stuff for some reason. Another reason to get these things into Git!!

92-outputs_nagios

# This sends nagios output only if the service and status fields
# are populated
if [nagios_service] =~ /./ and [nagios_status] =~ /./ {
        if "nagios_check_eventlog_biztalk" in [tags] {
                nagios_nsca {
                        host => "
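The excerpt cuts off inside the nagios_nsca block, so for reference, a minimal complete stanza in this style might look like the sketch below - the host value, port, and field mappings are placeholders/assumptions, not our actual config:

```conf
# 92-outputs_nagios (sketch) - forward matched events to Nagios via NSCA
if [nagios_service] =~ /./ and [nagios_status] =~ /./ {
        if "nagios_check_eventlog_biztalk" in [tags] {
                nagios_nsca {
                        host           => "nagios.example.com"   # placeholder
                        port           => 5667
                        nagios_host    => "%{host}"
                        nagios_service => "%{nagios_service}"
                        nagios_status  => "%{nagios_status}"
                }
        }
}
```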

Logstash configuration update - filters

Reviewing how we deal with filters in our (mostly) Windows environment in Logstash. Inputs were already discussed; outputs will be discussed in another post. I'll be sanitizing the info as best I can.

Filters

So our config file structure looks like this for filters:

01-filters_start.conf (how I keep the brackets organized)
02-filters_drop.conf (drop stuff first)
10-filters_grok.conf (fixing grokparsefailures to start)
15-filters_mutate.conf
20-filters_tagging.conf
30-filters_metrics.conf
89-filters_end.conf (bracket organization)

Let's go through them...

01-filters_start

filter {

Yep, that's it.  Just organization.

02-filters_drop

# drop esx syslog because vmware bugs
#if [message] =~ /esx1/ {
#       drop { }
#}

This was used because the ESX cluster continued dumping syslog messages well after I disabled sending for some reason, and it gave me a chance to see what a drop filter looks like.  Left in because maybe someone will find this handy.
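The grok file itself didn't make it into this excerpt; as a sketch of the same style, a 10-filters_grok stanza might look like the below - the type name, pattern, and field names are made up for illustration, not our actual patterns:

```conf
# 10-filters_grok (sketch) - parse the timestamp/host preamble on shipped events
if [type] == "eventlog" {
        grok {
                match => [ "message", "%{TIMESTAMP_ISO8601:timestamp} %{HOSTNAME:hostname} %{GREEDYDATA:msg}" ]
        }
        # anything that doesn't match gets the _grokparsefailure tag,
        # which is what the "fixing grokparsefailures" note above is about
}
```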

Logstash configuration update - config files, inputs

As a fun addendum to the ELK posts, I'm posting up some of the updated config files in the hopes that it'll help some folk starting down the 'ELK in a Windows environment' path.

Here's our config file structure:

00-inputs.conf (all inputs in here, really just two)
01-filters_start.conf (how I keep the brackets organized)
02-filters_drop.conf (drop stuff first)
10-filters_grok.conf (fixing grokparsefailures to start)
15-filters_mutate.conf
20-filters_tagging.conf
30-filters_metrics.conf
89-filters_end.conf (bracket organization)
90-outputs_start.conf
91-outputs_elasticsearch.conf
92-outputs_nagios.conf
93-outputs_graphite.conf
99-outputs_end.conf

As part of my own self-review, I'll try to go through all of this, but I'll break it up into two posts - one on filters, one on outputs.  The remainder of this post will have the config file breakout and Input section discussion.

Config file breakout

Since I'm still a Linux newb, my co-worker
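The excerpt is cut off before the actual inputs, but since 00-inputs.conf holds "really just two", a typical pair for a Windows-centric ELK setup would be a TCP listener for NXlog and a UDP syslog listener - a sketch, with ports and codec as assumptions rather than our actual values:

```conf
# 00-inputs (sketch) - all inputs in here, really just two
input {
        # NXlog ships Windows event logs / IIS logs as JSON over TCP
        tcp {
                port  => 3515
                codec => json_lines
        }
        # network gear and ESX hosts via plain syslog
        udp {
                port => 514
                type => "syslog"
        }
}
```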

Why you need centralized event logging (Windows)

It will hit you, eventually.  You'll be sitting at your desk browsing through Kibana data and suddenly realize that it's been WEEKS since you last opened event viewer on a server.  And waited for it to load.  And then had to activate filters.  And then discover that your log entry is on another server. Get the ELK stack.  Just try it out.  NXlog, ELK, and a browser.  Done.  You don't need to be a Linux guru (although some experience helps). It is a lovely feeling. Also, when people come to you and say, 'hey man, I love this new system - it saves me so much time! Thank you!', you'll be all 'cool man.  cool.'.  And you'll then also have that wonderful feeling that what you are doing actually matters.  Fun times!

Using the Chef IIS cookbook to build a site/appPool

Because the documentation is sparse, figured I'd post these up.  Spent way more time & trial/error than I should have had to!!  (I was going to post these weeks ago, but didn't get it working until the kindly devs at opscode resolved the bugs.  Note that the fix hasn't actually been published yet, but all this works with the repo linked from here: )

Create the directory:

directory "C:\\Web\\Data\\#{cookbook_name}" do
  action :create
  recursive true
  rights :full_control, "Administrators", :applies_to_children => true
  rights :full_control, "domain\\username", :applies_to_children => true
  rights :full_control, "domain\\group name", :applies_to_children => true
  not_if { ::File.exist?("C:\\Web\\Data\\#{cookbook_name}") }
end

Create an appPool:

iis_pool "#{cookbook_name}" do
  runtime_version "4.0"
  pipeline_mod
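The excerpt cuts the appPool off mid-attribute. For completeness, here's roughly where that was headed - a sketch from memory of the iis cookbook resources of that era, so the attribute names and the port/path values are assumptions, not the published recipe:

```ruby
# Sketch: finish the appPool and bind a site to it (iis cookbook resources)
iis_pool "#{cookbook_name}" do
  runtime_version "4.0"
  pipeline_mode :Integrated        # assumption: the line the excerpt cuts off at
  action :add
end

iis_site "#{cookbook_name}" do
  protocol :http
  port 80                          # placeholder
  path "C:\\Web\\Data\\#{cookbook_name}"
  application_pool "#{cookbook_name}"
  action [:add, :start]
end
```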

Adding cluster nodes, expanding storage - Elasticsearch

This is a copy/paste/censor of our wiki doc (that I wrote, and have permission to publish; the censoring might make stuff look off, sorry).  I left in the generic stuff because maybe we're doing something horribly wrong and a nice person will point it out.  Or maybe we're doing something that someone else will find inspiration in.  Who knows!  Gives some context anyways.  Hope this helps someone...

Overview of adding a new node to the ES cluster.

Template

1. Deploy the current CentOS template
2. Assign an ip address from IPadmin
3. Create the A record in AD DNS now
4. Add a 500GB disk, located on one of the VMFS-ES-x datastores
5. Change the networking:
   - /etc/sysconfig/network-scripts/ifcfg-eth0 (change IP)
   - /etc/sysconfig/network (change hostname)
6. Run updates: yum update
7. Reboot

ES Big Disk config

If VM is already in place

This will allow you to add the disk without rebooting:

echo "- - -" > /
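The excerpt cuts off mid-command. On a typical CentOS guest the hot-add rescan is an echo into each SCSI host's scan file; since the host number varies per VM, the usual approach is to loop over all of them (a sketch assuming the standard sysfs layout, not the censored wiki text):

```shell
# Rescan all SCSI hosts so the hot-added VMDK appears without a reboot
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done

# The new 500GB disk should then show up (e.g. as /dev/sdb)
ls /dev/sd*
```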

Usage experience with Elasticsearch (ELK) so far

We started looking for monitoring options after someone brought me the problem: 'WUG can't run scripts this long... what are our options?'  So naturally I said, 'well, I'm sure Nagios can handle something like this' (because zero budget and prior experience).

Getting it running wasn't an issue, but eventually I felt we needed more on the dashboard side, and started reading (though not quite agreeing with) the '#monitoringsucks' posts.  Finally I came across OMD and tried it out - it's a user-friendly and functional implementation of everything I was looking for!

Except eventually it wasn't.  And I started seeing amazing graphs (because ooh shiny) from the folks doing Logstash, then read about how centralized storage of logs is good/handy.  So hey, why not?  It was really easy to get running, but learning the Logstash config stuff took some effort.

Before:
Windows event logs: Log on to server, open event viewer (or use RSAT)
IIS logs: I