TFS & GO & Chef, oh my: Onboarding update!

Well, as of yesterday afternoon, we have one website and one database actively onboarded into our automation system.  An unbelievable amount of work, but totally worth it.

What was chosen as the candidate for this pilot is our primary web front-end (referred to as WebUI from here on out) and one of its accompanying databases ('the database').  I'll try to describe what we're doing and how it's getting accomplished.

Pipeline overview (as of today, anyway)...
  • Database (pipeline group)
    • Database Build (pipeline)
      • Builds the SQL scripts - we have pre/deploy/post scripts - pre/post are data transformation, 'deploy' is functionality-related.  Dev opted to use the VS equivalent of SQL Compare - we will be using the Prod database as the true reference.  The 'deploy' scripts are automatically generated; pre/post scripts are written by hand and are optional.
    • Database Deploy (pipeline)
      • Snapshot script - 'reverts' the snapshot, then takes another one.  After the sprint is over we 'commit' the snapshot data (I'm not used to SQL snapshot terminology, sorry), so when the next sprint starts there is no snapshot present.  All deployments during the sprint effectively 'start at the beginning' - hence the QA data injection in the smoketest....
      • Execute the SQL scripts in order - we are using the SQLPS module for all our SQL commands/executions - I'll do another post on that
  • WebUI (pipeline group)
    • WebUI Build (pipeline)
      • Builds from a VS setup project
      • Moves files/configs around
      • Imports the build output as a Go build artifact
    • WebUI Deploy (pipeline)
      • DeployFiles
        • Fetches the Go build artifact
        • Zips the files, sends to server(s), sends environment-appropriate configuration files
      • Chef run
        • Runs 'chef-client' on target server(s) - just a website restart/appPool recycle right now
      • Smoketest
        • Dev coded up an NUnit/Selenium test that does two things:
          • Ensures basic functionality is present (login, adding users, etc)
          • Injects 'QA data' into the database via the functionality test
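To make the pre/deploy/post ordering concrete - a rough sketch.  The real pipeline drives everything through the SQLPS PowerShell module, so this Python snippet only illustrates the ordering convention; the file-naming scheme (`pre_`, `deploy_`, `post_` prefixes) is an assumption for illustration, not our actual layout:

```python
from pathlib import Path

# Hypothetical sketch: collect pre/deploy/post scripts and return them in
# execution order.  The real deployment runs each script via SQLPS; the
# prefix naming convention here is an assumption for illustration.
PHASES = ["pre", "deploy", "post"]

def ordered_scripts(script_dir):
    """Return .sql files grouped by phase prefix, phases in fixed order."""
    scripts = sorted(Path(script_dir).glob("*.sql"))
    ordered = []
    for phase in PHASES:
        ordered.extend(s for s in scripts if s.name.startswith(phase + "_"))
    return ordered
```

The nice part of keeping the phases explicit like this is that pre/post stay optional - if no files match a phase, that phase just contributes nothing.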

Lessons learned

  • There is always a struggle between doing it right and doing it on time, and having people on the team on both sides can be a benefit.
    • A team entirely on one side tends to either get nothing done or get nothing done well.
    • The caveat is that you must understand what's fundamental to get right from the start, and be able to live with non-fundamental garbage for a time.
      • Parameterizing your scripts is fundamental (i.e. building blocks)
      • Having the scripts execute in 30s vs. 2m is not fundamental (i.e. performance improvements)
  • If you find yourself building a huge workaround, you might be doing it wrong.
    • We were bashing our heads trying to figure out how to do the whole 'build repository' thing...ya, there's something called a build artifact that Go 100% manages for you.
    • We had initially been 'pro-Git' because the internet said so.  We are now 100% TFS (OK, except for Chef) because mashing both together just wouldn't work.
  • Keep up the enthusiasm!  Eventually it'll catch on.
    • Keep up the momentum and people will get caught up in it like Katamari Damacy
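To illustrate the 'parameterizing is fundamental' point, here's a hypothetical sketch (none of these names come from our actual scripts): a deploy step that takes the environment, servers, and artifact as parameters is a reusable building block, where hardcoded values would mean a new copy of the script per environment.

```python
# Hypothetical sketch: a parameterized deploy step.  Because environment,
# servers, and artifact are all parameters, the same building block works
# for QA, UAT, and Prod - nothing gets copy/pasted per environment.
def plan_deploy(environment, servers, artifact_zip):
    """Return the list of copy actions for one environment's deployment."""
    config_file = f"web.{environment}.config"  # environment-appropriate config
    return [
        {"server": server, "files": [artifact_zip, config_file]}
        for server in servers
    ]
```

Moving from QA to Prod is then a one-argument change instead of a new script - exactly the kind of thing that's painful to retrofit later.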

Next steps

  • We have the remainder of the sprint to really see what this is like to live with...and to move into the other environments (past QA), but everybody is pretty excited.  They've already come up with another project to onboard - this one right from Day 1!
  • Communication of these changes to the rest of the team is also big - this is a totally new way of thinking, so we need to ensure everyone is aware of the benefits/changes to process.
  • A lot of cleanup to do with our deployment scripts and pipeline config (some jobs are still called 'DefaultJob') - but it's mostly just polishing/templating - at least right now.  We also still have to onboard the UAT/Prod environments.
  • For funzies we put together a Selenium Grid system...annnd now it's a thing.  Dev is working on getting Appium integrated.  We have two nodes for web browsers already.
  • Also...we have an automation person starting on Monday - should be pretty exciting to see where that goes.  

Hectic times ahead, but super exciting!  Things are going to be mega-hyperbole-busy for the next few months...colo move...automation...two production environment moves...more automation...monitoring...

