

Showing posts from March, 2011

Making a case for a purchase to management

An issue I've come across many times now is how to properly present a purchase request to your manager or their boss. The occasional manager will just take your word for it, but some (hopefully most) will question the need. As a technical person, I find it really easy to get caught up in what something can do rather than why we need it in the first place.

Case in point.

An upgrade was needed for some KVM servers - more RAM - and I put the purchase request through to my superior. He sent it over to the director, who immediately shot back an email asking for a really good reason to approve it. After some futzing about via email (I was out of the office that day), it became clear that he was fine with approving it, but we needed to make the case as to WHY he should. Frankly, we needed to prove it to ourselves first; he was technically competent and was not just going to stamp everything 'a-ok'.

There was a clear reason to me and my co-wor…

Lab setup 101

Lab setup
I finally finished the cabling and BIOS finagling on the main ESXi box. Trying to run 20 SATA cables neatly in a small space is tricky business, I tell you, but in the end it turned out clean enough that the wiring should not restrict air flow too much.

The motherboard is a Supermicro X8SAX. RAM is Kingston ValueRAM (6x4GB). CPU is an i7-950. For storage I have two PERC 6i cards and one PERC 5i (actually an LSI8408E). The 6i cards each have eight 2.5" 15k 73GB SAS drives in RAID10 and the 5i has four 1TB 7.2k SATA drives in RAID10. I will need to revisit the cooling inside, as the RAID cards get quite hot. Right now I've slung a fan behind them blowing out of the case.

ESXi is running off of some Verbatim USB drives I got from newegg ('clip-it' model). I have just finished 'Maximum vSphere' by Eric Siebert and he gives a really cool method of creating the OS on these drives. Instead of installing manually, you use Workstation, connect the USB d…

Lab update

Criminy, I've never had so many shipping boxes arrive in the course of a week. Cables are out for delivery today; I'm hoping we can get home in time to catch the UPS driver on his last round, as my wife did yesterday for the SAS drives.

Speaking of which...the 2.5" SAS drives are SOLID little things. Lovely. I got them all mounted in the IcyDock enclosures and only broke one LED indicator...the manufacturing on the drive sleds is not perfect - but for the price, I'm not complaining. I will send them a note to see if I can order a spare sled (I have two spare drives after all).

Support chat was very helpful - I can do an online RMA for the unit. They also provided me with the link for an individual tray: http://www.icydock.com/goods.php?id=125 Unfortunately, the only place offering it is amazon.com (which does not ship to Canada), and one tray is $40. I'll pass!! Five or ten bucks would be OK...but $40 is nuts. Have to see about the RMA then.

I need to check and se…

Lab - Bright news! FC addition

Update: More Arstechnica goodness! Brocade provides free e-learning: http://www.brocade.com/education/course-schedule/index.page

See Server Room thread here: http://arstechnica.com/civis/viewtopic.php?f=21&t=1138681&p=21409635#p21409635

Thanks to the good people in TheServerRoom@arstechnica, the lab has a wonderful new addition - FC! I have managed to acquire the following for a very low price:
Brocade 3800 FC switch (2Gb, 16-port w. SFPs, all licensing)
QLogic QLA2344/2342 HBAs (1 quad/4 dual, PCI-X)

Very exciting! I have very little hands-on experience with FC, and seeing as how it's an industry standard for serious storage, I should probably fix that. Thus, extra connectivity!

I will be using the trial version of the Open-E VSA to start, as it can simulate an FC SAN. I think the FalconStor VSA does this as well. The quad FC HBA will go in the i7 ESXi host, and each of the i5 hosts will get a dual FC HBA. Of course, this brings me down from seven ethernet ports per h…

Lab - Datastore & iSCSI config

Physical layout
FastArray01 & FastArray02
Disks: 8x 2.5" 73GB 15k SAS (IBM)
Card: PERC6i w. BBU kit (PCIe x8)
RAID level: RAID10
FastArray01 size: 292GB (raw)
FastArray02 size: 200GB (raw)
FastArray03 size: 92GB (raw)
LargeArray01 & LargeArray02
Disks: 4x 3.5" 1TB 7.2k SATA (WD)
Card: PERC 5i (PCIe x8) (will try to get BBU working)
RAID level: RAID10
LargeArray01 size: 1500GB (raw)
LargeArray02 size: 500GB (raw)
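A quick sanity check on the raw sizes above: RAID10 keeps half of the summed disk capacity, and each card's total is then carved into the virtual disks listed. A minimal sketch of that arithmetic (disk counts and sizes taken from the layout above; "GB" here means the raw/marketing figure, not formatted capacity):

```python
# RAID10 usable capacity = (disks / 2) * disk size; numbers from the layout above.

def raid10_usable_gb(disks: int, size_gb: int) -> int:
    """Raw usable capacity of a RAID10 set, in marketing GB."""
    return disks // 2 * size_gb

# Each PERC 6i: 8x 73GB 15k SAS in RAID10
fast_per_card = raid10_usable_gb(8, 73)
print(f"Per PERC 6i: {fast_per_card} GB")        # 292 GB -> FastArray01
print(f"Second card carved up: {200 + 92} GB")   # FastArray02 + FastArray03 = 292 GB

# PERC 5i: 4x 1TB (1000 GB) 7.2k SATA in RAID10
large_total = raid10_usable_gb(4, 1000)
print(f"PERC 5i total: {large_total} GB")        # 2000 GB -> LargeArray01 + LargeArray02
```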
Logical layout
Storage service
iSCSI provided over two Intel PRO1000MT quad NICs
Local storage (FastArray03 & LargeArray02) for home VMs
VLANs
VLAN50 - for LabSite1 iSCSI storage
VLAN60 - for LabSite2 iSCSI storage
VLAN200 - for 'home' iSCSI storage
iSCSI notes
iSCSI target for each array
CHAP security
VSAs to test
Open-E DSS v6 (iSCSI and FC)
HP LeftHand P4000 (iSCSI only)
FalconStor NSS (iSCSI and FC)
NetApp ONTAP 8.0
EMC Celerra
NexentaStor (FC plugin available)
StarWind (via 2008 R2)
Openfiler
Solaris ZFS
Tests to run
SQLIO
Random/sequential transfer speeds? (rough stand-in sketch below)
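SQLIO is the real tool for the tests listed above; as a very rough stand-in, here is a minimal Python sketch that times sequential versus random reads against a test file placed on the datastore under test. The file name and sizes are placeholders, and since it does not use direct I/O or bypass the OS page cache, any numbers it prints are ballpark only:

```python
# Minimal sequential-vs-random read timing sketch (rough SQLIO stand-in).
import os
import random
import time

TEST_FILE = "testfile.bin"      # placeholder: put this on the datastore under test
FILE_SIZE = 256 * 1024 * 1024   # 256 MB test file (placeholder size)
BLOCK = 64 * 1024               # 64 KB reads
RANDOM_READS = 2000

# Create the test file once, in chunks.
if not os.path.exists(TEST_FILE):
    with open(TEST_FILE, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(os.urandom(BLOCK))

def sequential_read() -> float:
    """Read the whole file front to back; return elapsed seconds."""
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    return time.time() - start

def random_read() -> float:
    """Do RANDOM_READS reads at random offsets; return elapsed seconds."""
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        for _ in range(RANDOM_READS):
            f.seek(random.randrange(0, FILE_SIZE - BLOCK))
            f.read(BLOCK)
    return time.time() - start

seq, rnd = sequential_read(), random_read()
print(f"sequential: {FILE_SIZE / seq / 2**20:.1f} MB/s")
print(f"random {BLOCK // 1024}K: {RANDOM_READS * BLOCK / rnd / 2**20:.1f} MB/s")
```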

Infrastructure

Thank the Lord the back door was open at the office. The building manager had emailed me Saturday informing me that my card would not work for the back door on the weekends. Thankfully he was incorrect! What he did not mention (I guess he deemed it irrelevant) was that the door between the lobby and rear hallway was locked (physical, not electronic), so the elevator was no longer an option. This meant carrying everything to the back door from the basement. It actually was not that bad, but I was doing some fine huffing and puffing by the end of it.
Two-post rack
Two rackmount shelves
Two PDUs (1U and a 0U)
Two SUA3000RM2U
One SUA2200RM2U
Three RBC43 packs from the aforementioned UPSes

Not sure which of the UPSes I'll be putting to use. I have to tally up the equipment and see what kind of load I'm generating. Either way, I'm putting in a new circuit dedicated to the UPS - either 30A or 20A (rough arithmetic below). Good thing the rest of our house is wired so awfully (our entire upper floor is on…
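The circuit question is mostly simple arithmetic: a 120V branch circuit loaded to the usual 80% continuous rule gives roughly 1920W on 20A and 2880W on 30A. A hedged sketch of the tally, where every equipment wattage is a placeholder guess rather than a measured number, and the UPS output ratings are the usual nameplate figures that should be double-checked:

```python
# Rough arithmetic for the "which UPS / what circuit" question above.
# Equipment wattages are placeholder guesses, not measured draw.

VOLTS = 120
CONTINUOUS_FACTOR = 0.8   # common 80% rule of thumb for a continuously loaded circuit

estimated_load_w = {
    "storage/ESXi host (i7, 20 spindles)": 450,   # guess
    "two smaller ESXi hosts": 2 * 200,            # guess
    "switches, FC switch, misc": 150,             # guess
}
total_w = sum(estimated_load_w.values())

ups_output_w = {"SUA3000RM2U": 2700, "SUA2200RM2U": 1980}  # nameplate output ratings

print(f"Estimated load: {total_w}W")
for name, rating in ups_output_w.items():
    print(f"{name}: rated {rating}W -> ~{total_w / rating:.0%} loaded")
for amps in (20, 30):
    budget = VOLTS * amps * CONTINUOUS_FACTOR
    verdict = "fits" if total_w <= budget else "too much"
    print(f"{amps}A circuit: ~{budget:.0f}W continuous budget -> {verdict}")
```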

Lab - SAN results

Well, decided I had wasted enough time hemming and hawing on what to do for lab storage, so it's done.

A VSA-style storage setup will offer more flexibility, but since I want this host to be somewhat permanent, I've decided that instead of just adding storage to two ESX hosts, I'll build a dedicated 'perma' ESX host. This host is spec'd with the following:
Core i7-950 / 24GB / Supermicro X8SAX-O (lots of PCIe and PCI-X slots)
iStarUSA D-400-6-ND case (6 external 5.25" slots)
PC Power & Cooling 500W PSU
4x 1TB WD Caviar Black (RAID10 for my larger datastore)
4x IcyDock MB994SP-4S SAS/SATA hotswap bays
2x PERC 6i RAID cards w. fanout cables and BBU kits
16x IBM 2.5" 73GB 15k SAS drives (two RAID10 arrays for the 'fast' datastores)
2x PRO1000MT quad NICs (PCI-X)

This host will run my 'home' VM environment and will never be subject to wiping or messing about. Kind of a lab 'production' environment for VMware and Microsoft lab work. The lab 'lab' envi…

Lab - SAN options part 3

I've realized that the last two entries on this don't really explain what I need and why. So here goes.

The next few years I will be heavily dependent on having a lab at home for study purposes, 'just messing about', and for my part-time work.

Thus, while I have hosts to handle the virtual machines' RAM, CPU, and NIC duties, I now need some back-end block-level storage (NFS can be done in a VM).

Requirements
Decent IOPS & throughput
iSCSI connectivity with MPIO being a 'nice-to-have'
Allowing for separation between 'sites/datacenters'
250/500/1000GB of space (min/ok/max)
Resilience of some manner to disk failure
Cost no more than $2500 all-in
Be reasonably power-efficient
A few notes on the above.
MPIO would be nice, as it is a common thing requiring advanced configuration in ESX deployments; it would also speed things up quite a bit (quick sketch below).
iSCSI connectivity is mandatory, as FC would cost way too much (yes, I just spent the last 20 minutes researching 'or is …
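On the MPIO note above: with round-robin across multiple gigabit paths, the best case is roughly per-link throughput multiplied by the number of paths. A back-of-envelope sketch; the ~110 MB/s per-link figure is a typical practical GbE number, not something measured in this lab:

```python
# Back-of-envelope MPIO ceiling: round-robin over N gigabit iSCSI paths.
GBE_PRACTICAL_MBPS = 110  # typical usable MB/s on a single GbE link (assumption)

def mpio_ceiling(paths: int, per_link: int = GBE_PRACTICAL_MBPS) -> int:
    """Ideal aggregate MB/s across independent paths (ignores array limits)."""
    return paths * per_link

for paths in (1, 2, 4):
    print(f"{paths} path(s): ~{mpio_ceiling(paths)} MB/s ceiling")
```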

Lab - SAN options part 2

I was doing some work today using Workstation backing onto the Synology, and MAN is it slow. Six VMs are running, but only two or three are actually using any disk (updating the template, and installing SBS on another). Now, this is of course running off of the Microsoft iSCSI initiator - no VMFS, so we'll let it slide until the lab is functional (picking up rack stuff on Sunday, cases/PSU arrive from Newegg early next week).

Here is another option that I have been toying with:

Use VSAs exclusively
There are a number of VSAs available, and it would make some sense to use them just for the vendor experience.

With regards to hardware, I have a few options here as well. I can have each ESXi box host a local datastore that the VSA would then serve from. No idea on performance, but I would imagine if the back-end was fast enough the overhead could be mitigated somewhat.

Hardware
PERC 6i w. BBU -> SATA breakout cables -> IcyDock 4x2.5" hotswap bay(s) -> disks

The disks are yet anot…

Lab - SAN options

Update: Going to wait and see what the next few months bring for new products. Hesitant to purchase the Iomega units as they are several years old now.

I had a sobering thought this afternoon. My entire VMware lab bases its iSCSI SAN usage on our Synology DS410j - the same NAS that houses all of our pictures, data, etc. If my lab kills it, my wife kills me (and frankly I'd be super-upset as well). We do have backups (that I need to test) to an external drive, but it would be a giant pain if it broke.

Thus, I consider my options.

Dedicated Openfiler box (~$1800)
This would be a stand-alone machine running the latest version of Openfiler: dual RAID cards with 8 spindles each, and two dual NICs for MPIO to two locations, thus mimicking two SANs.

Pro: Does MPIO, high spindle count.
Con: Cannot test SAN replication. Power hippo.

Two NAS units that support iSCSI (~$2200)
Basically buy two DS411+ and fill with 1TB or 2TB drives.

Pro: Simple to configure/maintain. Can do synchronization between u…

Groups not resolving - final solution

Let's go through this without any assumptions (or try to, anyways).

Problem
Users in the parent domain send an email to a group (whose members are all in the parent domain). The email does not come through - no errors, no indication to the user that it has failed. If the email is also addressed to individuals (internal or external), those individuals receive it.

On the Exchange side, the message log for an email sent only to a group stops at the 'Categorizer' step. For an email sent to a group and individuals the log completes for the individuals, but the group is not logged past the Categorizer stage.

Environment
Exchange 2003, AD at the 2008 functional level. Exchange 2010 domain preparation has been performed. Clients are running Windows XP/7 and Outlook 2007/2010. The Exchange server is hosted in the parent domain; the child domain connects to that server.

Exchange 2003 machine has Directory Access set to automatically discover DCs, and sees all the DCs in the domain (5 child dom…
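The Categorizer stage where the log stops is where Exchange 2003 expands a distribution group via a Global Catalog lookup, so one sanity check is whether a GC (port 3268) actually returns the group's members. A hypothetical sketch using the ldap3 Python library; the server name, base DN, credentials, and group name below are all placeholders, not values from this environment:

```python
# Hypothetical check: does a Global Catalog (port 3268) return the group's members?
# The Exchange 2003 categorizer expands distribution groups via GC lookups, so a
# group that looks fine over normal LDAP but is empty/missing on the GC is suspect.
from ldap3 import Server, Connection, NTLM, SUBTREE

GC_HOST = "dc01.parent.example.com"       # a GC in the parent domain (placeholder)
BASE_DN = "DC=parent,DC=example,DC=com"   # forest root (placeholder)
GROUP_CN = "All Staff"                    # the problem group (placeholder)

server = Server(GC_HOST, port=3268)       # 3268 = Global Catalog over plain LDAP
conn = Connection(server, user="PARENT\\someadmin", password="secret",
                  authentication=NTLM, auto_bind=True)

conn.search(BASE_DN,
            f"(&(objectClass=group)(cn={GROUP_CN}))",
            search_scope=SUBTREE,
            attributes=["member", "groupType"])

for entry in conn.entries:
    print(entry.entry_dn)
    members = entry.member.values if "member" in entry.entry_attributes else []
    print(f"groupType: {entry.groupType}")
    print(f"members visible on this GC: {len(members)}")
```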