Posts

New learning experience begins - South Bend lathe!

Something I've wanted to do for a looooooong time finally happened - picked up a metal lathe!  With the included tooling and the condition it was in, it was a pretty good deal, and the owner threw in a whole pile of extras when he saw I was genuinely interested in learning. This thing is ancient - 1921, the owner said - but he showed me it running and showed me projects he'd made on it.  It'll work just fine for learning - and since it already works and has been restored, I don't need to invest time/$ in a restoration/repair. I'm hoping to learn basic lathe operation, metrology, cutter grinding, and basic tool-making, and to make some neat projects/enhance existing projects.

It came with:

- The lathe itself: a South Bend 25-Y (9" x 36") - fully restored, painted, ways in good shape, fully oiled
- 120v motor w/ v-belts - I'll be getting some link-belts to replace them
- Metal table
- All change gears (Imperial only, but that's not surprising for a lathe this old)
...

SonarQube & TeamCity & RDS

Our environment:

- All-AWS
- MySQL RDS for the DB
- Running the Sonar Java app off our Linux TeamCity server on port 9000 (as a service)
- TeamCity agents are Windows boxes w/ JDK (cinst -y JDK8)
- Active Directory on 2012R2
- Sonar 6.2 (probably a bad idea)

Some key points to getting things actually running...

RDS

- Create a new parameter group in RDS that matches your RDS instance type (e.g. MySQL 5.6)
- Set these params (because trial and error is fun with a 1GB repo scan):
  - max_allowed_packet: 32768000
  - innodb_log_file_size: 1024217728
- I also set some stuff via MySQL Workbench as per their documentation (collation stuff):

    USE sonar;
    ALTER DATABASE sonar CHARACTER SET utf8 COLLATE utf8_bin;

LDAP

Was so pleased that after hours of googling, I found something that 'just worked'... I wish I'd saved the link with this.  It was a StackOverflow result, of course.  Tried 2-3 that did not work... This config is obviously wide-open, but only accessible via our VPN, so accepta...
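The RDS steps above can also be scripted instead of clicked through the console. A rough sketch using the AWS CLI and the mysql client - the parameter-group name and DB endpoint here are placeholders, not from our setup:

```shell
# Create a parameter group matching the instance family (name is hypothetical)
aws rds create-db-parameter-group \
  --db-parameter-group-name sonar-mysql56 \
  --db-parameter-group-family mysql5.6 \
  --description "SonarQube MySQL tuning"

# Apply the two params above. max_allowed_packet is dynamic;
# innodb_log_file_size is static, so it only takes effect after a reboot.
aws rds modify-db-parameter-group \
  --db-parameter-group-name sonar-mysql56 \
  --parameters \
    "ParameterName=max_allowed_packet,ParameterValue=32768000,ApplyMethod=immediate" \
    "ParameterName=innodb_log_file_size,ParameterValue=1024217728,ApplyMethod=pending-reboot"

# Same collation statement as via MySQL Workbench (endpoint is a placeholder)
mysql -h sonar-db.example.rds.amazonaws.com -u admin -p \
  -e "ALTER DATABASE sonar CHARACTER SET utf8 COLLATE utf8_bin;"
```

You still have to attach the parameter group to the instance and reboot it before the static params (like innodb_log_file_size) actually apply.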

Facilitated my first "Spotify" team health check - with a semi-remote team!

As part of bringing the (remote) teams together, we're going to try 'team health checks' a la Spotify.  The goal is threefold:

- Provide a venue for teams to communicate together, a safe forum to let their voice be heard.
- Provide senior leadership with a "safe" dataset that gives them 'the feeling' of the team.
- Provide action items to address the low points.

The initial team meeting to present the concept went well - everyone seemed genuinely interested, and a few team members had done similar things with previous employers. So today was the first health check... here's how it went down. We asked the team members to review the list of topics (cards) before the meeting, and this was the agenda (2-hour time slot):

- Start the meeting focusing on my role as facilitator
- Go through each topic - any we want to remove?
- Go through them again, but vote and discuss vote results
- Look at the results, pick focus area(s)
- Determine action item...

2012R2, IIS, and KB3052480 - random ASP.NET app domain restarts

I will leave out the 'how we came to this conclusion' part, but suffice it to say it took 3 people the better part of 2 full days. There is, in my humble opinion, a rather gaping bug in 2012R2 IIS that manifests itself like this:

- You have many file changes in the web root, but they are contained to the 'safe' directories (i.e. not in /bin or whatnot)
- In theory, IIS doesn't care and you go about your merry way - because it only monitors /bin and /app_data (iirc?) and some config files
- You find yourself chasing spikes in your (New Relic) data, traces that make no sense (the transactions that get called the most have a lot of looong traces)
- Nobody understands what's going on, because no errors are being thrown

You can verify that you're seeing a lot of ASP.NET app domain restarts by looking at perfmon: ASP.NET Applications - (your instance) - Application Lifetime Events. What you should be seeing upon a legit recycle is one startup event and one sh...
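If you'd rather watch that counter from a terminal than keep perfmon open, Windows ships a typeperf CLI. This is just a sketch assuming the same counter path as in perfmon above - substitute your app's instance name for __Total__:

```shell
# Sample the app-domain lifetime counter every 5 seconds (Windows cmd/PowerShell).
# A healthy recycle shows a small, stable count; a climbing count means
# something keeps tearing the app domain down.
typeperf "\ASP.NET Applications(__Total__)\Application Lifetime Events" -si 5
```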

Why do we measure?

I've been reading 'How to Measure Anything: Finding the Value of Intangibles in Business', 3rd edition (you should read this, too) and it has completely revised how I approach problems. The biggest thing to me is this: my understanding of WHAT a measurement is and WHY we measure things has completely and fundamentally changed. Pretend someone asked you 'what is a measurement?'.  Now do the same thing for 'why do we measure?'.  Answer the questions, then scroll down. Compare to this:

What: A measurement is the reduction of uncertainty based on observations.
Why: Because we are trying to reduce the risk in a decision we need to make.

I am (badly) paraphrasing/summarizing... but that's the gist.  A reduction of uncertainty - NOT the elimination of uncertainty.  And measurement now has a PURPOSE - to help you reduce risk when making decisions.  It starts with knowing what decision you need to make. So simple, but so powerful.  You...

When prod is down, what do you see?

I get this picture in my head of all our users going to load our app (you know, the one they pay for), and they just get a spinning load icon.  And they frown, and try again.  Then they start getting angry and blaming the app for making them late, and for their mortgage, and all of life's problems... and from that moment on they resent the app.  Other users simply sigh in defeat and walk away, forever remembering this moment of the app letting them down. Kinda silly, but that's the best visualization of what goes on in my head when production's on fire.  The longer it takes to fix, the more users get angry and/or depressed.

AWS Gotcha - IIS + ELB + HTTPS

So I've run into this twice now and have to write it down so I don't forget (or at least so future-me can google myself).

We were using a standard AWS Elastic Load Balancer (ELB) to serve traffic to our API:

- The API is served from IIS webservers via 80/443.
- The ELB is not checking back-end cert authenticity (the servers have a self-signed cert).

We're trying to move toward treating servers as cattle rather than pets, and so decided to spin up a batch of new servers and migrate over using the very fabulous Route 53 traffic policies.  It worked great! Until humans got involved. Since this is a new thing, I built one, tested it, then built the rest.  For whatever reason a seemingly minor change I made didn't make it back into the repo, and so the other 3 servers were built using the non-fixed configuration. We noticed in New Relic that of the four servers, only one was doing any heavy lifting - the other 3 were receiving almost the same amount of r...
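One cheap guard against this kind of config drift is to curl each backend directly (bypassing the ELB) and eyeball or diff the responses before shifting traffic. A sketch - the hostnames and health path are hypothetical, not from our environment:

```shell
# Hit each backend's health endpoint directly and compare responses.
# -k skips cert validation, since the backends use self-signed certs.
for host in web1 web2 web3 web4; do
  echo "== $host =="
  curl -sk "https://$host.internal.example.com/health"
  echo
done
```

If one box answers differently from its siblings, you've found your odd server out before Route 53 ever sends it real traffic.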