TFS & Go: April update - no more Chef (for now)

Sooo, we dropped Chef from our lineup.  There were a number of good reasons to do so, such as:

  1. To properly get the most out of Chef in a Windows environment, you really need DSC (Desired State Configuration).  We are not using DSC at this time (it's on the to-do list).
  2. We were starting to twist it into doing deployment tasks (such as 'stop the AppPool, deploy, start the AppPool') - which isn't what Chef is meant for.
We found a way to do the same thing via (easy?) PowerShell scripts and modules!  Yes, we have really started having fun.  Not only that, but all of this is in TFS and part of our deployment chain.
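
Roughly, a deploy step stitched together from these modules looks something like the sketch below - the module file name, artifact path, and target share are placeholders here, not our exact production script:

# Sketch of a deploy step - placeholder paths/names, not the exact production script
param(
    [string]$projectName  = $(Throw "Required parameter missing: projectName"),
    [string]$targetServer = $(Throw "Required parameter missing: targetServer"),
    [string]$artifactPath = $(Throw "Required parameter missing: artifactPath")
)

# Load the AppPool module (placeholder file name)
Import-Module $PSScriptRoot/appPoolFunctions.psm1

# Stop the AppPool so nothing holds file locks, copy the build artifact over, start it back up
Stop-RemoteAppPool $projectName $targetServer
Copy-Item -Path "$artifactPath\*" -Destination "\\$targetServer\d$\WebApps\$projectName" -Recurse -Force
Start-RemoteAppPool $projectName $targetServer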

Another big step is moving all of our pipelines into templates, and introducing a development process around new template features/fixes.  This means that keeping track of pipelines is really easy for us, and much easier for the devs to deal with from a requirements standpoint.

Further - we've gotten everyone to agree to a naming convention so our template model works, plus everyone is now required to include at least a basic NUnit test - the default WebApp templates build and deploy the App plus tests!

Templating looks like this...

  1. AppName-Build
    1. Build the App, export build artifact
    2. Build the Tests, export build artifact
  2. AppName-QA
    1. Deploy the App using our IIS functions module
    2. Deploy/Execute the Tests, export test artifact (see the test-runner sketch after this list)
  3. AppName-UAT
    1. Same as QA, and then the same again into Prod
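
The 'Deploy/Execute the Tests' step is mostly just pointing an NUnit console runner at the test artifact and keeping the results file as the exported test artifact. Something like this rough sketch (it assumes the NUnit 3 console runner; the paths and dll name are placeholders):

# Rough sketch of the test-execution step - runner path, artifact folder, and dll name are placeholders
$nunitConsole = "C:\Tools\NUnit\nunit3-console.exe"
$testAssembly = "C:\Deploy\AppName-Tests\AppName.Tests.dll"

# Run the tests; the results file becomes the exported test artifact
& $nunitConsole $testAssembly "--result=TestResult.xml"
if ($LASTEXITCODE -ne 0) {
    Throw "NUnit reported failing tests (exit code $LASTEXITCODE)."
}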

Publicizing the process

And we publicize the progress/status of all this using a custom version of this: https://github.com/LateRoomsGroup/GoDashboard

We have it as part of our Ops dashboard cycle, and it's also shown on a dedicated TV outside the CTO's office (near the dev team).

It looks like this: (so handy that right now everything is green!)


Long PowerShell scripts follow...


Sorry that this is so long and broken up...let me know if you want more info.

Here's what we're using to control Application Pools. The reason we're doing this is that we ran into issues with files being locked at deploy time (we narrowed it down to the AppPool holding the locks).

Stop-RemoteAppPool $projectName $targetServer

That is our module coming into play...here's what that does...
 # Wrapper function for Set-RemoteAppPool
function Stop-RemoteAppPool {
    param(
        [Parameter(Position=1)][string]$appPoolName = $(Throw "Required parameter missing: appPoolName"),
        [Parameter(Position=2)][string]$serverName  = $(Throw "Required parameter missing: serverName")
    )
   
    Set-RemoteAppPool $appPoolName $serverName Stop
}
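
Since Set-RemoteAppPool takes the action as a parameter, a matching Start wrapper would look exactly the same with the action flipped (this one is a sketch - only the Stop wrapper is shown above):

# Wrapper function for Set-RemoteAppPool (sketch of the Start counterpart)
function Start-RemoteAppPool {
    param(
        [Parameter(Position=1)][string]$appPoolName = $(Throw "Required parameter missing: appPoolName"),
        [Parameter(Position=2)][string]$serverName  = $(Throw "Required parameter missing: serverName")
    )

    Set-RemoteAppPool $appPoolName $serverName Start
}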
And 'Set-RemoteAppPool'... (it's a long one - the 'Add-CustomModule' piece just does a 'load if not loaded' kind of thing; there's a sketch of that after the function)

# This is the main piece of the module
function Set-RemoteAppPool {
    param(
        [Parameter(Position=1)][string]$appPoolName = $(Throw "Required parameter missing: appPoolName"),
        [Parameter(Position=2)][string]$serverName  = $(Throw "Required parameter missing: serverName"),
        [Parameter(Position=3)][string]$action      = $(Throw "Required parameter missing: action")
    )
    
    # Load our helper module so Add-CustomModule can be passed into the remote session below
    Import-Module $PSScriptRoot/moduleFunctions.psm1
    $exportFunctions = "function Add-CustomModule { ${function:Add-CustomModule} }"

    #######################################################################################################
    # We are about to enter the remote system...
    Invoke-Command -ArgumentList $appPoolName,$exportFunctions,$action -ComputerName $serverName -ScriptBlock {
        # Create the Functions (cmdlets) on the remote system
        param($appPoolName,$importFunctions,$action)
        . ([ScriptBlock]::Create($importFunctions))
        
        # Ensure the WebAdministration module gets loaded on the remote system
        Add-CustomModule WebAdministration

        # Get the current state of the target appPool
        $appPoolState = (Get-WebAppPoolState $appPoolName).Value

        # Action the appPool based on parameter#3 (start)
        if ($action -eq 'Start') {
            # Action:Start - Start the AppPool from a stopped state
            if ($appPoolState -eq 'Stopped') {
                Start-WebAppPool $appPoolName
                # 3 seconds should be tons of time for an AppPool to start from a stopped state
                # while still being relatively short from an automation cycle time perspective
                Start-Sleep -s 3
                if ((Get-WebAppPoolState $appPoolName).Value -eq 'Started') {
                    Write-Host "AppPool $appPoolName successfully started."
                }
                else {
                    # It could be in a starting or unknown state...since we're failing out
                    # here, let's output the current state of the appPool for debug purposes.
                    $currentAppPoolState = (Get-WebAppPoolState $appPoolName).Value
                    Throw "AppPool $appPoolName failed to start! `r`nIt's current state is $currentAppPoolState."
                }
            }
        }
        # Action the appPool based on parameter#3 (stop)
        elseif ($action -eq 'Stop') {
            # Action:Stop - Stop the AppPool from a started state
            if ($appPoolState -eq 'Started') {
                Stop-WebAppPool $appPoolName
                # 2 seconds should be plenty of time for an AppPool to enter a stopped state
                Start-Sleep -s 2
                if ((Get-WebAppPoolState $appPoolName).Value -eq 'Stopped') {
                    Write-Host "AppPool $appPoolName successfully stopped."
                }
                else {
                    # Wait 3 more seconds - longer than this and it's broken
                    # Perhaps there could be lots of production connections, so let's
                    # allow for that - an additional 3 seconds.
                    Start-Sleep -s 3
                    if ((Get-WebAppPoolState $appPoolName).Value -eq 'Stopped') {
                        Write-Host "AppPool $appPoolName successfully stopped."
                    }
                    else {
                        # It could be in a stopping or unknown state...since we're failing out
                        # here, let's output the current state of the appPool for debug purposes.
                        $currentAppPoolState = (Get-WebAppPoolState $appPoolName).Value
                        Throw "AppPool $appPoolName failed to stop! `r`nIt's current state is $currentAppPoolState."
                    }
                }
            }
        }

    } # End ScriptBlock - exiting the remote system...
    #######################################################################################################
}
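
As for that 'Add-CustomModule' helper - the original isn't included in this post, but a 'load if not loaded' function along these lines would do the job (a sketch, not our exact module code):

# Sketch of a 'load if not loaded' helper (the real Add-CustomModule isn't shown in this post)
function Add-CustomModule {
    param(
        [Parameter(Position=1)][string]$moduleName = $(Throw "Required parameter missing: moduleName")
    )

    # Only import the module if it isn't already in the session
    if (-not (Get-Module -Name $moduleName)) {
        Import-Module $moduleName
    }
}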
