Posts

Making a case for a purchase to management

An issue I've come across many times now is how to properly present a purchase request to your manager or their boss. The occasional manager will just take your word for it, but some (hopefully most) will question the need for it. As a technical person, I find it really easy to get caught up in what something can do rather than why we need it in the first place. Case in point: an upgrade was required for some KVM servers - more RAM - and I put the purchase request through to my superior. He sent it over to the director, who immediately shot back an email requiring a really good reason to approve it. After some futzing about via email (I was out of the office that day), it became clear that he was fine approving it, but we needed to make the case as to WHY he should approve it. Frankly, we needed to prove it to ourselves first; he was technically competent and was not just going to stamp everything 'a-ok'. There was a clear reason to me and my co-w...

Lab setup 101

I finally finished the cabling and BIOS finagling on the main ESXi box. Trying to run 20 SATA cables neatly in a small space is tricky business, I tell you, but in the end it turned out clean enough that the wiring should not restrict airflow too much.

Motherboard: Supermicro X8SAX
RAM: Kingston ValueRAM (6x4GB)
CPU: i7-950
Storage: two PERC 6i cards and one PERC 5i (actually an LSI 8408E). The 6i cards each have eight 2.5" 15k 73GB SAS drives in RAID10, and the 5i has four 1TB 7.2k SATA drives in RAID10.

I will need to revisit the cooling inside, as the RAID cards get quite hot. Right now I've slung a fan behind them, blowing out of the case. ESXi is running off of some Verbatim USB drives I got from Newegg ('Clip-it' model). I have just finished 'Maximum vSphere' by Eric Siebert, and he gives a really cool method of creating the OS on these drives. Instead of installing manually, you use Workstation, connect the USB...
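The idea behind the Workstation trick is to boot the ESXi installer ISO inside a VM and pass the physical USB stick through, so the installer writes directly onto it. A minimal sketch of the relevant .vmx settings - key names are from memory, not from the book, so treat them as an assumption and check against your Workstation version:

```
# Assumed .vmx fragment - enable the USB controller so the physical
# stick can be connected to the installer VM from the Workstation UI.
usb.present = "TRUE"
ehci.present = "TRUE"

# Attach the ESXi installer ISO as the VM's CD-ROM boot device.
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "ESXi-installer.iso"
```

With the VM powered on, you connect the USB drive to the guest, run the installer against it, then move the stick to the physical host.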

Lab update

Criminy, I've never had so many shipping boxes arrive in the course of a week. Cables are out for delivery today; I'm hoping we can get home in time to catch the UPS driver on his last round, as my wife did yesterday for the SAS drives. Speaking of which... the 2.5" SAS drives are SOLID little things. Lovely. I got them all mounted in the IcyDock enclosures and only broke one LED indicator... the manufacturing on the drive sleds is not perfect, but for the price, I'm not complaining. I will send them a note to see if I can order a spare sled (I have two spare drives, after all). Support chat was very helpful - I can do an online RMA for the unit. They also provided me with the link for an individual tray: http://www.icydock.com/goods.php?id=125 Unfortunately, the only place offering it is amazon.com (does not ship to Canada), and one tray is $40. I'll pass!! Five or ten bucks would be OK... but $40 is nuts. I'll have to see about the RMA, then. I need to check and...

Lab - Bright news! FC addition

Update: More Arstechnica goodness! Brocade provides free e-learning: http://www.brocade.com/education/course-schedule/index.page See the Server Room thread here: http://arstechnica.com/civis/viewtopic.php?f=21&t=1138681&p=21409635#p21409635

Thanks to the good people in TheServerRoom@arstechnica, the lab has a wonderful new addition - FC! I have managed to acquire the following for a very low price:

Brocade 3800 FC switch (2Gb, 16-port with SFPs, all licensing)
QLogic QLA2344/2342 HBAs (1 quad, 4 dual; PCI-X)

Very exciting! I have very little hands-on experience with FC, and seeing as it's an industry standard for serious storage, I should probably fix that. Thus, extra connectivity! I will be using the trial version of the Open-E VSA to start, as it can simulate an FC SAN; I think the FalconStor VSA does this as well. The quad FC HBA will go in the i7 ESXi host, and each of the i5 hosts will get a dual FC HBA. Of course, this brings me down from seven ethernet ports ...

Lab - Datastore & iSCSI config

Physical layout

FastArray01 & FastArray02
Disks: 8x 2.5" 73GB 15k SAS (IBM)
Card: PERC 6i w. BBU kit (PCIe x8)
RAID level: RAID10
FastArray01 size: 292GB (raw)
FastArray02 size: 200GB (raw)
FastArray03 size: 92GB (raw)

LargeArray01 & LargeArray02
Disks: 4x 3.5" 1TB 7.2k SATA (WD)
Card: PERC 5i (PCIe x8) (will try to get BBU working)
RAID level: RAID10
LargeArray01 size: 1500GB (raw)
LargeArray02 size: 500GB (raw)

Logical layout

Storage service
iSCSI provided over two Intel PRO1000MT quad NICs
Local storage (FastArray03 & LargeArray02) for home VMs

VLANs
VLAN50 - for LabSite1 iSCSI storage
VLAN60 - for LabSite2 iSCSI storage
VLAN200 - for 'home' iSCSI storage

iSCSI notes
iSCSI target for each array
CHAP security

VSAs to test
Open-E DSS v6 (iSCSI and FC)
HP LeftHand P4000 (iSCSI only)
FalconStor NSS (iSCSI and FC)
NetApp ONTAP 8.0
EMC Celerra
NexentaStor (FC plugin available)
StarWind (via 2008 R2)
Openfiler
Solaris ZFS

Tests to run
SQLIO
Random/sequential tra...
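As a sanity check on the sizes above: RAID10 usable capacity is half the summed raw disk capacity (disks are paired into mirrors, then striped). A quick sketch - the array names and the way one card's pool is carved into the 200GB and 92GB arrays are taken from the list above:

```python
def raid10_usable_gb(n_disks: int, disk_gb: int) -> int:
    """Usable capacity of a RAID10 array: half the disks hold mirror copies."""
    assert n_disks % 2 == 0, "RAID10 needs an even number of disks"
    return (n_disks // 2) * disk_gb

# Each PERC 6i carries eight 73GB SAS drives in RAID10.
fast_pool = raid10_usable_gb(8, 73)     # 292GB per card
# The PERC 5i carries four 1TB SATA drives in RAID10.
large_pool = raid10_usable_gb(4, 1000)  # 2000GB

# FastArray01 takes one card's whole pool; FastArray02 and
# FastArray03 split the other card's 292GB between them.
assert fast_pool == 292
assert 200 + 92 == fast_pool
# LargeArray01 and LargeArray02 split the SATA pool.
assert 1500 + 500 == large_pool
```

The numbers in the list line up exactly, which is reassuring before carving LUNs.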

Infrastructure

Thank the Lord the back door was open at the office. The building manager had emailed me Saturday informing me that my card would not work for the back door on the weekends. Thankfully he was incorrect! What he did not mention (I guess he deemed it irrelevant) was that the door between the lobby and the rear hallway was locked (physical, not electronic), so the elevator was no longer an option. This meant carrying everything to the back door from the basement. It actually was not that bad, but I was doing some fine huffing and puffing by the end of it.

Two-post rack
Two rackmount shelves
Two PDUs (1U and 0U)
Two SUA3000RM2U
One SUA2200RM2U
Three RBC43 packs from the aforementioned UPSes

Not sure which of the UPSes I'll be putting to use. I have to tally up the equipment and see what kind of load I'm generating. Either way, I'm putting in a new circuit dedicated to the UPS - either 30A or 20A. Good thing the rest of our house is wired so awfully (our entire upper floor...
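The "tally up the equipment" step is just summed draw against each UPS's watt rating (the SUA3000RM2U is nominally 3000VA/2700W and the SUA2200RM2U 2200VA/1980W - verify against the APC spec sheets). A rough sketch, with entirely hypothetical per-device wattages standing in for measured values:

```python
# Hypothetical per-device draw in watts - replace with measured numbers
# (a Kill A Watt meter or the UPS's own load readout works well).
equipment_watts = {
    "i7 ESXi host": 350,
    "i5 host #1": 150,
    "i5 host #2": 150,
    "Brocade 3800": 75,
    "misc switches": 60,
}

# Nominal watt ratings; the VA ratings are higher, but watts are the
# limit that matters for a mostly power-factor-corrected server load.
ups_watts = {"SUA3000RM2U": 2700, "SUA2200RM2U": 1980}

total = sum(equipment_watts.values())
for model, capacity in ups_watts.items():
    load_pct = 100 * total / capacity
    print(f"{model}: {total}W load = {load_pct:.0f}% of {capacity}W")
```

Lower load percentage also means longer runtime on battery, which is worth weighing against tying up the bigger unit.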

Lab - SAN results

Well, I decided I had wasted enough time hemming and hawing over what to do for lab storage, so it's done. A VSA-style storage setup would offer more flexibility, but since I want this host to be somewhat permanent, I've decided to go from just adding storage to two ESX hosts to a dedicated 'perma' ESX host. This host is spec'd with the following:

Core i7-950 / 24GB / Supermicro X8SAX-O (lots of PCIe and PCI-X slots)
iStarUSA D-400-6-ND case (6 external 5.25" bays)
PC Power & Cooling 500W PSU
4x 1TB WD Caviar Black (RAID10 for my larger datastore)
4x IcyDock MB994SP-4S SAS/SATA hotswap bays
2x PERC 6i RAID cards w. fanout cables and BBU kits
16x IBM 2.5" 73GB 15k SAS drives (two RAID10 arrays for the 'fast' datastores)
2x PRO1000MT quad NICs (PCI-X)

This host will run my 'home' VM environment and will never be subject to wiping or messing about. Kind of a lab 'production' environment for VMware and Microsoft lab work. The lab 'lab...