Posts

Showing posts from November, 2014

TFS & GO & Chef, oh my: Part 11 - The deploy process

Trying to work out a few things right now:
1. Where do I draw the line between 'do this in Chef' and 'do this from Go'?
2. How do you 'template' Chef recipes?
3. Where do I draw the line between 'do this using Chef' and 'do this using Chef but via PowerShell'?
Regarding #1, I can do pretty much everything in Chef... but then you lack the visibility and variables from Go. This should be cleared up by figuring out exactly what we want the Go deploy pipeline to look like - where do we need the milemarkers to sit? Or maybe we just need one stage with really detailed error handling?
Regarding #2, right now I have a cookbook for 'WebApp1', and it will only work for WebApp1, not WebApp2, or 10, etc. I imagine there is a simple method here. (Might have found it: #{cookbook_name})
Regarding #3, it really comes down to 'I know this can be done (and how to do it) in PS, but figuring it out in Chef will take time'. For the P...

TFS & GO & Chef, oh my: Part 10 - It's alive! ...ish...

The build process is now 100% functional! Except I broke it. For some reason the GO agent started only using the Active Directory NicelyFormattedName... which of course broke the auto-login, because the Linux-side name is all lowercase. After changing the environment variable and deleting every cache I could find, ended up just changing the AD account to all lowercase. Fixed. :S
Now it's fully functional! Except the additional test branches dev gave us won't build like the first one.
Good news is that we are now Chefing - and this part is what I've really been looking forward to. After a LOT of reading, it seems this is the recommended practice:
- Use a 'roles' cookbook - because versioning EVERYTHING is super important, and roles are not (at this time) versioned
- Environments allow you to restrict versions
- Try and keep things as simple as possible - limit the 'blast radius' of bad changes
Here's how we are (at this time) structuring thin...

TFS & GO & Chef, oh my: Part 9 - Lesson: Powershell & the GO Agent & ssh

Long story short, even if you have the GO Agent service (Windows) running as the correct user, commands will still run as 'SYSTEM'. I could not figure out how to change this, or get SYSTEM to use the id_rsa for the correct user (3+ hours of Google and trial/error). So, here's an example command, old vs. new:

OLD:
cmd /c powershell c:\scripts\BuildFileMove.ps1 %GO_PIPELINE_NAME%

NEW:
cmd /c powershell $pass = convertto-securestring %GO_SHELL_PASS% -asplaintext -force; $mycred = new-object -typename System.Management.Automation.PSCredential -argumentlist "%GO_SHELL_USER%",$pass; invoke-command -computername localhost -ScriptBlock {c:\scripts\BuildFileMove.ps1 %GO_PIPELINE_NAME%} -credential $mycred

The stuff that was failing, and how we figured it out:
- We're using Git for Windows (the command line, full bash integration), and trying to do push/pull
- The PowerShell scripts all run fine when done via the ISE/PS prompts, but when you try to get the...
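For reference, here's a commented version of that pattern - a minimal sketch, assuming PowerShell remoting is enabled on the agent and that GO_SHELL_USER/GO_SHELL_PASS are exposed to the job as environment variables (everything else is illustrative):

# Build a credential for the account the job should really run under.
$pass = ConvertTo-SecureString $env:GO_SHELL_PASS -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($env:GO_SHELL_USER, $pass)

# Re-enter PowerShell via Invoke-Command so the script block runs as that user
# (and therefore picks up its profile and its ~/.ssh/id_rsa) instead of SYSTEM.
Invoke-Command -ComputerName localhost -Credential $cred `
    -ScriptBlock { param($pipeline) & c:\scripts\BuildFileMove.ps1 $pipeline } `
    -ArgumentList $env:GO_PIPELINE_NAME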

Your site is hacked - how to handle the situation properly

One of my client's sites was hacked the other day - a new experience for me, so I thought I'd share in a somewhat linear format.
- The developer who wrote the site did so using Drupal, and apparently a few weeks ago a pretty big vulnerability was found (something to do with PHP).
- The site is hosted by Network Solutions (UNIX web host package) - NS henceforth. There is no customer data held on the site; it's just informational & a few PDFs.
- NS detected that the site was hacked, shut it down, and sent an email to the primary contact.
- The primary contact was someone who used to manage the account - I was never made primary (for whatever reason - obviously THIS is why having it set correctly is important).
- The client called saying their customers were reporting an offline site. The NS splash page simply said 'invalid domain'.
- I called NS after triple-checking NS account status & domain validity. NS confirmed account status was OK, transferred me to the web host division -...

TFS & GO & Chef, oh my: Part 8 - How we'll handle trunks/branches, build/git push oopsie, SSH keys auth

Because of how I'd planned on setting up pipelines, the question arose: what do you do with branches? With that process, you would have to create a fresh pipeline for each branch - the idea being your build repo project list is 'clean' for each project. Well, after discussion, this didn't make much sense, so we will now simply have a pipeline per project (app/svc/site), and we simply adjust the pipeline material project path on each build request. In other words, from a build/deploy point of view, we don't care if it's a trunk or branch request - just give us the project path and we'll build it. Simple process change!
The 'downside' to this is that if you need to push out an emergency build, it'll 'interrupt' anything currently running in the environment chain. This is primarily due to us only having one of each environment. Eventually, with rapid build/teardown environments, this will go away.
It also became clear as I was testing t...

TFS & GO & Chef, oh my: Part 7 - Minor? setback

So due to a number of reasons I won't disclose, the process needs to change. Slightly more manual, but if GO offers the option, should be mitigated reasonably well. Essentially the applications use stored procedures and whatnot in the DB layer - these are shared to recycle code. Unfortunately it means that we cannot simply have a pipeline for each app/svc - items must be updated together (and taken offline to do so). Something for our architecture review board meetings...
Anywho... here's the new plan:
- New sprint starts - PM informs us that XYZ apps/svcs are being updated
- We create a single new pipeline for the SQL scripts
- Then create a 'sprint dashboard' page with the relevant individual pipelines
- When they are ready to build, we can do each item individually (or together)
- When they are ready to deploy, we run the SQL pipeline
- This stops all relevant services/sites and runs the SQL scripts
- The buildmaster then runs the other (remaining) pipelines on the...

TFS & GO & Chef, oh my: Part 6 - Just legwork now...

Here's a broad overview of the process. We are surprisingly close...
- Code for the project (webapp/website/windows service) sits in TFS
- The GO build agent is on a Windows server w. Visual Studio - it builds the project
- Have the config files changed? If so, this process is still manual - thankfully rare, should just be an onboarding task
- On the build agent, git commit the built files and config files
- After the commit, get your latest version number and apply a label into the TFS project noting the associated git hash (so when troubleshooting, we say 'it's this git hash version we're running', and they say, 'oh, what label in TFS?' - now we have an answer!) (co-worker came up with this idea; a sketch of this step follows the list)
- Git houses built project files and project config files
- Git also houses the Chef config files
- Chef will be run to re-configure IIS (if necessary), operate services/appPools, and... dun dun dun... run a git fetch! (or git clone, not sure yet) Bam! New code!
Ext...
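Here's a hedged sketch of that 'commit the build and label TFS with the git hash' step - only git and tf.exe themselves are assumed; the repo path, branch, label name, and TFS server path are all illustrative:

# Commit the freshly built output to the local clone of the build repo.
Set-Location 'D:\BuildRepo\WebApp1'
git add -A
git commit -m "Build output for $env:GO_PIPELINE_NAME ($env:GO_PIPELINE_LABEL)"
git push origin master

# Grab the hash we just created...
$hash = git rev-parse --short HEAD

# ...and stamp it onto the TFS source project as a label, so 'which git hash is
# running?' maps straight back to a point in TFS.
tf label "Build_$hash" '$/WebApp1/Main' /recursive /comment:"Built as git $hash"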

TFS & GO & Chef, oh my: Part 5 - Build repo solved!

One of the senior tech folks had some great input for how the process should work. I'd been wondering about what to use as a build repository - had hoped we could use Git - and sure enough he suggested using Git!
Before:
- Project is built (raw files)
- Files are zipped
- Zip filename and an injected file have version #s
- Update symlinks to latest/previous?
- Chef handles IIS/win svcs
- Deploy latest zip via Chef
- Inject appropriate configs
After:
- Project is built (raw files)
- Files are checked in to Git
- Chef handles IIS/win svcs
- Git fetch for latest
- Inject appropriate configs
So Chef will be handling IIS configuration & services, and then just running a git fetch (or git clone for new installs) to update the files. Because each step of the pipeline is configurable, we don't even need to 'figure out' what environment we're deploying to. So this means when we deploy to QA, we specify to copy the QA scripts.
Another interesting concept is 'th...

TFS & GO & Chef, oh my: Part 4 - Config dilemma resolved?

Stumbled across this little gem: 'configSource'. It looks like this:

web.config:
<connectionStrings configSource="config\connectionStrings.config"/>

config\connectionStrings.config:
<connectionStrings>
  <add name="DBname" connectionString="Database=DBname;Server=DEVSERVER\DB;user=pp.AccountName;password=DEVpassword;" providerName="System.Data.SqlClient"/>
</connectionStrings>

What we'll do is create three files: (DEV|UAT|PROD).connectionStrings.config. At deployment time the script will determine which file is correct (probably env. var), rename it (strip out the env), and delete the other files - a sketch of that step follows at the end of this post. Not pretty, but should be a lot easier to live with than copies of the entire file. The devs should be amenable to a one-time change of the config file structure. For the time being, this will be a manual process (it already is), but it should lend itself to automation in future.
Stuff left to fig...
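A minimal sketch of that select/rename/delete step, assuming the target environment is available in an environment variable - DEPLOY_ENV is just my guess at a name, and the config folder path is illustrative:

# Pick the connection strings file for this environment, rename it to the name
# web.config expects, and delete the copies for the other environments.
$environment = $env:DEPLOY_ENV              # e.g. 'DEV', 'UAT', or 'PROD'
$configDir   = 'D:\Sites\WebApp1\config'

Rename-Item -Path (Join-Path $configDir "$environment.connectionStrings.config") `
            -NewName 'connectionStrings.config'

# The remaining *.connectionStrings.config files belong to the other environments.
Get-ChildItem -Path $configDir -Filter '*.connectionStrings.config' | Remove-Item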

TFS & Jenkins & Chef, oh my: Part 3 - No more Jenkins?

ThoughtWorks' GO is up and running - it is wonderful. A bit of a learning curve when working with TFS (which is odd... haha, or not), but the team seems to really like the concept so far.
We also sorted out the versioning question... or rather... the issue was handed back to us with 'we don't care about versioning, you deal with it'. Okie dokie!
The config files themselves I think we'll just transform (somehow) at bundling time (turn the dev config file into env.web.config files), then delete the extras upon Chef-ing and rename. Or something.
About where to store the Chef configs - for now we'll just do a local repo, but I'm pretty sure we'll want a Git server.
Confirmed with the dev team: it shouldn't be a big deal to migrate to TFS 2013, as long as the client-side patches are in place (SP1 and a GDR patch for VS2010).
It's now simply down to working out the process, then manhandling tools to fit it. A textbook CD implementation this is not, but in the le...

TFS & Jenkins & Chef, oh my: Part 2 - POC detail

The goal is to simplify the POC... I think my initial vision will be a bit much to pull off 'well' inside a month. Some key points like packaging/dependency management, pipelining, etc., will have to be set aside.
POC Scenario: Build and deploy a webapp/svc project (it just needs to deploy with no human intervention)
- New code checked in; Orchestrator sees new code and kicks off build script
- Build command runs, creates zip file & moves it into Jenkins build# folder?
- Build successful? Orchestrator runs deploy script (otherwise notify failure)
- Hey Chef, please install the updated project (build) into QA
- OK, let's check out the project definition (a sketch follows at the end of this post):
  - Get latest build.zip
  - Stop services
  - Copy/extract/overwrite files
  - Copy new configuration files
  - Start services
- Let's test! serverspec?
- Report success (pipeline?)
Bonus Scenario: Rollback
- Something is not caught by testing; we need to roll back ASAP.
- Authorized person chooses 'rollback to previous ve...
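Since the project definition above boils down to 'stop, extract, re-config, start', here's a minimal PowerShell sketch of those steps - the service name, drop share, and site paths are illustrative, and the zip extraction uses the .NET ZipFile class rather than anything Chef- or Jenkins-specific:

# Hedged sketch of the 'install the updated project into QA' definition above.
$serviceName = 'WebApp1Svc'
$buildZip    = '\\buildserver\drops\WebApp1\latest\build.zip'
$siteRoot    = 'D:\Sites\WebApp1'

# Stop services
Stop-Service -Name $serviceName

# Copy/extract/overwrite files (extract to a staging folder, then overwrite the site)
Add-Type -AssemblyName System.IO.Compression.FileSystem
$staging = Join-Path $env:TEMP 'WebApp1_build'
if (Test-Path $staging) { Remove-Item $staging -Recurse -Force }
[System.IO.Compression.ZipFile]::ExtractToDirectory($buildZip, $staging)
Copy-Item -Path (Join-Path $staging '*') -Destination $siteRoot -Recurse -Force

# Copy new configuration files (QA flavour in this example)
Copy-Item -Path '\\buildserver\drops\WebApp1\configs\QA\*.config' -Destination (Join-Path $siteRoot 'config') -Force

# Start services
Start-Service -Name $serviceName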