Lab - SAN options part 2

I was doing some work today using Workstation backed by the Synology, and MAN is it slow. Six VMs are running, but only two or three are actually using any disk (updating the template on one, installing SBS on another). Now, this is of course running off the Microsoft iSCSI initiator - no VMFS - so we'll let it slide until the lab is functional (picking up rack stuff on Sunday; cases/PSU arrive from Newegg early next week).

Here is another option that I have been toying with:

Use VSAs exclusively
There are a number of VSAs available, and it would make some sense to use them just for the vendor experience.

With regard to hardware, I have a few options here as well. I can have each ESXi box host a local datastore that the VSA would then serve from. No idea on performance, but I would imagine that if the back-end were fast enough, the overhead could be mitigated somewhat.

Hardware
PERC 6i w/ BBU -> SATA breakout cables -> IcyDock 4x2.5" hotswap bay(s) -> disks

The disks are yet another option: 2.5" SATA, SAS, or SSD. In the following examples I'm assuming the SAS option is an 8-disk RAID10 and the SSD option is a 4-disk RAID0. Sounds unbalanced, but here is the reason: the SAS setup needs the RAID10 and the spindle count just to approach the SSD for performance (and even then it probably will not be as fast), while the SSD needs a four-disk RAID0 to match the SAS setup for space. Even then, we're only talking 250-300GB datastores...not a huge amount of space, but it should be enough. One more point in the RAID0's favor: if a disk fails, it will give me some nice experience in recovering a datacenter from backups. Ok, lame point.

DiskOption: 73GB 2.5" 15k SAS
SubTotal: ~$2000 (drop the two spare disks and the price is basically identical to the SSD option)
Price per GB: $6.85
Est. cost per annum: $120
Raw array size: 292GB
Pro: Resilient to disk failure. Lots of spindles means potentially better sequential performance (good for vMotion?). ~40GB more space than the SSD array. Two spare drives included in price.
Con: Disks are used/older. Fewer IOPS than the SSD array.

DiskOption: 80GB SSD
SubTotal: ~$1885
Price per GB: $7.37
Est. cost per annum: $41
Raw array size: 256GB
Pro: Super duper fast (probably will max the RAID card and bus, not to mention the iSCSI VLAN). Only need one dock per ESX host. Will use less power than the SAS array. New drives.
Con: Cost per drive is high. The RAID0 config needed to get the space is not resilient to failure. SSDs are not proven to last long in RAID.
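
As a quick sanity check on those two options, here's the back-of-the-envelope math in Python (subtotals and array sizes are the figures above; the per-disk wattages are the same assumptions as in the power note at the end of this post, and the odd cent of difference from the quoted price-per-GB is just rounding of the "~" subtotals):

# Quick sanity check of the two disk options above.
# Prices, array sizes and per-disk wattages are this post's own figures/assumptions.
options = {
    "8x 73GB 15k SAS, RAID10": {"subtotal": 2000, "raw_gb": 292, "disks": 8, "watts_per_disk": 7.2},
    "4x 80GB SSD, RAID0": {"subtotal": 1885, "raw_gb": 256, "disks": 4, "watts_per_disk": 1.0},
}
for name, o in options.items():
    price_per_gb = o["subtotal"] / o["raw_gb"]
    array_watts = o["disks"] * o["watts_per_disk"]
    print(f"{name}: ${price_per_gb:.2f}/GB, ~{array_watts:.0f}W at the disks")
# -> 8x 73GB 15k SAS, RAID10: $6.85/GB, ~58W at the disks
# -> 4x 80GB SSD, RAID0: $7.36/GB, ~4W at the disks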

Another downside to this configuration is that the boards I have purchased will require a PCI video card when configuring the RAID array. This is because the board treats anything in the x16 slot as a PEG (graphics) card and disables the on-board graphics. A bummer, as the on-board graphics were part of why I chose that board. At this point I am also assuming the RAID card will work in the x16 slot at all...I will obviously test before I go through with this - I have a PERC 5i to test with.

Huge Openfiler or ZFS box
This is a flight of fancy; it really doesn't do what I want, but it's fun anyway.
  • Norco 4U chassis - 24 hotswap bays
  • Appropriate PSU (depends on availability of staggered spinup)
  • Two dual-NICs (PCIe?)
  • 24 WD 1TB 7200rpm
  • RAID can be one big RAID10, or broken up into two RAID10 arrays (to simulate multiple chassis; quick capacity check below)
  • HP SAS Expander & HP P400 card (actually allows up to 32 drives) & Mini-SAS cables
  • Core i7-950, 24GB RAM
  • Supermicro X8STE-O (lots of PCIe slots)
SubTotal: $4000
Price per GB: $0.16 against the full 24TB of raw disk (roughly $0.33 per usable GB after RAID10)
Est. cost per annum: $245
Raw array size: 12TB
Pro: A lot of spindles, a lot of space. Very resilient to failures.
Con: A lot of power usage. More complicated methods of simulating multiple sites. Probably not as fast as a four-SSD RAID0. Dependence on Openfiler or ZFS to stay VMware-compatible.

The 4U Openfiler box is only really necessary if I were to be storing a LOT of data. And I'm not. But it was a fun mental exercise.
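
Same kind of quick check for the big box, mostly to confirm that splitting the 24 drives into two RAID10 arrays costs no capacity, and to show the price per GB measured both ways (1TB is treated as 1000GB here, which is why the raw figure rounds a cent higher than the $0.16 above):

# Capacity/cost check for the 24-bay box. Figures are from this post.
disk_gb = 1000
subtotal = 4000

def raid10_usable_gb(disks, size_gb):
    # RAID10 mirrors each pair, so usable space is half the member disks
    return disks // 2 * size_gb

one_big_array = raid10_usable_gb(24, disk_gb)
two_arrays = 2 * raid10_usable_gb(12, disk_gb)
print(one_big_array, two_arrays)                         # 12000 12000 -> ~12TB either way
print(f"${subtotal / one_big_array:.2f} per usable GB")  # $0.33 per usable GB
print(f"${subtotal / (24 * disk_gb):.2f} per raw GB")    # $0.17 per raw GB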

Conclusions
  1. SSD will be faster, but has a much greater chance of failure.
  2. SATA (the big Openfiler/ZFS box) will provide a lot of space, but has a much higher initial investment and annual cost.
Well...SAS comes out on top of this one. But I'm still not convinced...it leaves me with several wildcards:
  1. VSA performance overhead
  2. VMware overhead
  3. Being locked down to storage on a host (i.e. if anything happens to the host, the array is gone with it).
There is no point going with Openfiler, I think, because I would need two, and that means two more computers that could otherwise be hosts.


Which of course leaves the real option: Waiting.


:P

*Note: The difference in power cost between the SAS and SSD arrays is pretty significant! My calculations used 1W per SSD and 7.2W per SAS disk (a figure I found online for the wattage during seek). $70/year is not insignificant...but I suppose that depends on how long the SSDs last under RAID usage.
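
Roughly, the math behind that difference (the ~$0.15/kWh electricity rate is my assumption here; swap in your own):

# Rough annual power-cost difference between the two arrays, using the
# per-disk wattages from the note above. The electricity rate is assumed.
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.15             # assumption; your utility rate will vary

sas_watts = 8 * 7.2             # eight 15k SAS disks at ~7.2W (seek)
ssd_watts = 4 * 1.0             # four SSDs at ~1W

delta_kwh = (sas_watts - ssd_watts) * HOURS_PER_YEAR / 1000
print(f"~{delta_kwh:.0f} kWh/year, roughly ${delta_kwh * RATE_PER_KWH:.0f}/year at the assumed rate")
# -> ~470 kWh/year, roughly $70/year at the assumed rate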

*Note: The kWh estimate is based on heavy usage, so it is most likely exaggerated in this case.
