Openfiler project

As part of the lab rebuild, I am setting up an Openfiler box.


Some features I am using:
  1. iSCSI block-level storage with path redundancy
  2. FC block-level storage
  3. Block-level replication (if I pick up a second host)
As I've been setting it up, I'm realizing that it uses LVM for storage, which is kinda nice as it dovetails into what I've learned from the clusters at work. I've also realized that Fibre Channel has way more curb appeal than iSCSI. I've not had a chance to integrate the FC switch yet, but that's on the list. What I have done is document (thanks to the internet and some testing) the steps involved in setting up the target and granting access to new servers. For the price, I'd say you can't go wrong with this setup for a home lab:
  • Openfiler has a 4-port FC card ($80 per card)
  • Each ESXi server has a 2-port FC card ($40/per card)
  • FC cables from monoprice ($10/each)
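The target-side portion of those steps is mostly stock LVM, since that's what Openfiler drives under the hood. A minimal sketch of carving out a new volume to expose as a LUN, assuming a hypothetical spare disk at /dev/sdb and made-up volume names:

```shell
# Tag the spare disk as an LVM physical volume (device name is hypothetical)
pvcreate /dev/sdb

# Create a volume group dedicated to SAN exports
vgcreate vg_san /dev/sdb

# Carve a 200 GB logical volume to present as a datastore LUN
lvcreate -L 200G -n lv_esx_ds1 vg_san

# Sanity check: list the volumes in the new group
lvs vg_san
```

Openfiler's web GUI is doing the same thing behind the scenes, so knowing the CLI equivalents makes troubleshooting much easier.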
Now, the motherboards I am using in the ESXi servers do not have PCI-X slots, so the FC cards fall back to plain PCI, but being bus-limited to 133 MB/s is probably the least of my worries. The Openfiler box has 133 MHz PCI-X slots (~1,050 MB/s), so no loss there. The alternative would be iSCSI MPIO, but I'd still be limited to PCI slot speeds, since the other two slots on the ESXi boards are also plain PCI (holding two Intel PRO/1000 dual-port cards). Two GbE ports could in theory move ~250 MB/s, but they'd be sharing that same 133 MB/s bus.
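For reference, if I do end up testing the iSCSI MPIO route, the ESXi-side setup would look roughly like this. This is a sketch, not a recipe: the adapter, vmkernel, and device names are hypothetical, the portal IP is made up, and exact esxcli syntax varies by ESXi version.

```shell
# Bind two vmkernel ports to the software iSCSI adapter (names hypothetical)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Point the adapter at the Openfiler portal (example address)
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.1.50:3260

# Switch the device to round-robin so both paths carry I/O (device ID is a placeholder)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```

Even with round-robin working, both vmknics would still funnel through the one PCI bus, which is why I haven't bothered yet.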

At any rate, I am running both FC and iSCSI from the Openfiler (SAS over FC, SATA over iSCSI), so there are two methods of accessing VMFS datastores.
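On the export side, my understanding is that Openfiler's iSCSI target is IET (configured in /etc/ietd.conf), while the FC target goes through SCST with the QLogic target driver. A hedged sketch of what mapping one logical volume out each way might look like; the IQN, LV path, and WWN below are all placeholders, and the SCST piece assumes a qla2x00t-capable build:

```
# /etc/ietd.conf (iSCSI via IET; IQN and LV path are examples)
Target iqn.2006-01.com.openfiler:tsn.esx-ds1
    Lun 0 Path=/dev/vg_san/lv_esx_ds1,Type=blockio

# /etc/scst.conf (FC via SCST/qla2x00t; WWN is a placeholder)
HANDLER vdisk_blockio {
    DEVICE esx_ds1 {
        filename /dev/vg_san/lv_esx_ds1
    }
}
TARGET_DRIVER qla2x00t {
    TARGET 21:00:00:xx:xx:xx:xx:xx {
        LUN 0 esx_ds1
        enabled 1
    }
}
```

After changes like these, the target services need a restart and the ESXi hosts need an HBA rescan before the new LUNs show up.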

The wiki has been updated with my install documentation - more to come...
