Lab - SAN options part 3

I've realized that the last two entries on this topic don't really explain what I need and why. So here goes.

For the next few years I will be heavily dependent on having a lab at home for study purposes, 'just messing about', and for my part-time work.

Thus, while I now have hosts for virtual machines - handling the RAM, CPU, and NIC duties - I still need some back-end block-level storage (NFS can be done in a VM).

Requirements
  • Decent IOPS & throughput
  • iSCSI connectivity with MPIO being a 'nice-to-have'
  • Allowing for separation between 'sites/datacenters'
  • 250/500/1000GB of space (min/ok/max)
  • Resilience of some manner to disk failure
  • Cost no more than $2500 all-in
  • Be reasonably power-efficient
A few notes on the above.
  • MPIO would be nice, as it is a common feature that requires more advanced configuration in ESX deployments; it would also speed things up quite a bit (see the sketch after these notes).
  • iSCSI connectivity is mandatory, as FC would cost way too much (yes, I just spent the last 20 minutes researching 'or would it...?', and yes, it would*).
  • I am most likely asking for the impossible...or what I really want is a SAN.
*FC may be possible by purchasing HBAs and a switch, then attaching the HBA to a VSA. Definitely going to look into this.
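To make the MPIO point a bit more concrete, here's a minimal sketch of what that 'advanced configuration' usually looks like on the ESXi side, assuming ESXi 5.x-style esxcli syntax (ESX 4.x used the older esxcli swiscsi commands instead). The adapter name (vmhba33), the VMkernel ports (vmk1/vmk2), and the naa device ID are all placeholders, not anything from my actual setup:

  # Bind two VMkernel ports to the software iSCSI adapter (vmhba33 is a placeholder)
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
  esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
  # List devices and their current path selection policy
  esxcli storage nmp device list
  # Switch a given LUN to round robin across the available paths
  esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

Whichever box I end up with just has to expose its LUNs over iSCSI and accept more than one session; the multipathing itself is handled on the ESXi side.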

Yup...the next step is to find out whether the VSA can see an FC HBA and put it to use. If it can, there is absolutely no reason to go with anything other than VSAs. However, I may decide to go with a dedicated VSA box - none of my boards have PCI-X, and all the HBAs I'd be looking at would require PCI-X.
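Before any of that, the host has to see the HBA at all. A quick way to sanity-check that from the ESXi shell (again, just a sketch - exact commands depend on the ESX/ESXi version):

  # List PCI devices and look for the FC HBA in the output
  esxcli hardware pci list
  # The storage adapters the host has actually claimed
  esxcli storage core adapter list

If the HBA shows up there, the remaining question is whether it can be handed through to the VSA (VMDirectPath) or whether it has to live in a dedicated box.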
