More than 6TB for ESX 4.0

Note: I was doing some cleanup of old posts, and noticed this was never published. Figured I'd throw the draft up anyways.

A bit of an issue with ESX 4.0, but a little intro first.


I'm trying to run vSphere ESX 4.0 on a home setup. An LSI 8408E (PERC 5/i) SAS/SATA card with two SAS-to-SATA breakout cables connects to eight WD 1TB 7.2k SATA disks. One drive was DOA initially and has since been replaced, but I'm still hearing head-access sounds that are too consistent for my liking.


They are configured as a single RAID5 array carved into four logical disks, because the initial 6.4TB logical disk could not be used by ESX due to its limitation of 2TB per LUN. After changing this configuration, ESX locked up at the 'checking filesystems' step of the boot process twice (a few hard resets and some mucking with the BIOS finally got it to clear). Now that I'm finally back into ESX with my new 2TB logical disks, I cannot add them as disks in my storage configuration!
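
(Quick back-of-the-envelope on the sizes, assuming the usual ~931GB of real space per marketing-1TB drive:

8 disks in RAID5 = (8 - 1) x 931GB ≈ 6.4TB usable
= 3 x 2TB logical disks + 1 x ~0.4TB leftover

...which lines up with what fdisk reports further down.)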


I get errors pertaining to partitioning when trying to add disks.


When I try to SSH in to the console, su - to root, and then run fdisk -l, it locks up. I waited a good 5+ minutes with no change. I can only assume the disk controller is having issues. As an additional hardware note, I've re-flashed the card to the latest available ROM from LSI - upgraded from v5 to v7!


I rebooted the host in the hopes that the ssh commands would then work. It seemed to get stuck on the 'store-logs' step of the boot process; after five minutes of that, I ctrl-alt-deleted for another try. It got past it this time, and I briefly noticed that it listed the names of the disks after the 'checking filesystems' step - sdf something, in this case.


Ok, so I gave up on ssh and just used the physical console, since it was right next to me and all... This time, fdisk -l instantly returned a huge list of stuff.


/dev/sda = 2199.0 GB - LSI 2TB logical disk
/dev/sdb = 400.5 GB - LSI leftovers logical disk
/dev/sdc = 37.0 GB - the 36GB Raptor boot drive used for the ESX install
/dev/sdd = 2199.0 GB - LSI 2TB logical disk
/dev/sde = 2199.0 GB - LSI 2TB logical disk
/dev/sdf = 8095 MB - not sure about this one...


At any rate, I've done the fdisk partition creation on each of them and rebooted - since none of them is the console drive, fdisk did not ask me to reboot, but another one couldn't hurt...right?
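
For reference, the fdisk run for each LSI logical disk went more or less like this (the device name here is just an example - check it against the fdisk -l output first):

[root@localhost ~]# fdisk /dev/sdd
n        <- new partition
p        <- primary
1        <- partition number 1
         <- accept the default first cylinder
         <- accept the default last cylinder (use the whole disk)
t        <- change the partition's system type
fb       <- fb is the VMware VMFS type
w        <- write the table and exit

Setting the type to fb is the step that marks the partition as VMFS.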


Upon reboot, the 'starting udev' step of the boot process took some time, as did the 'checking filesystems' step - presumably because of the 6.4TB worth of new partitions.


I have to step out at this point, however. More to come.


...


Well, I came back to a working console, so it must have just taken a bit longer than usual. I'm pretty sure something is up with my array at this point, because things were not this slow before the array was set up. Anyways...I digress.


The second link in my sources shows a correction for the error in the first link: it's vmkfstools, not vmkfstool, and /devices, not /device. At any rate, using another command from the first page's comments (esxcfg-mpath -l), we can see what disks we're dealing with. For a clearer version of what this command essentially lists, go to MUI->configuration->storage->add storage->add disk, where you can see a list of what disks/LUNs you have available. If you click on one and hit CTRL+C, you get this (a console-side alternative is noted just after the listing):


Local LSI Disk (naa.600605b000175dd01264d94cc2a2d019)
Runtime name: vmhba0:C0:T0:L0
LUN: 0
Capacity: 2.00 TB
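
(Side note, as promised: if the console is cooperating, I believe ESX 4.0 can list the same naa identifiers directly with esxcfg-scsidevs, no clipboard tricks required - something like:

[root@localhost ~]# esxcfg-scsidevs -c

...which should print a compact table of devices with their naa names and sizes, though I haven't verified the exact output on this box.)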


Hiyo! Something to put in our vmkfstools command!


[root@localhost disks]# vmkfstools -C vmfs7 /vmfs/devices/disks/naa.600605b000175dd01264d94cc2a2d019
Creating vmfs7 file system on "naa.600605b000175dd01264d94cc2a2d019" with blockSize 1048576 and volume label "none".
Usage: vmkfstools -C vmfs3 /vmfs/devices/disks/vml... or,
       vmkfstools -C vmfs3 /vmfs/devices/disks/naa... or,
       vmkfstools -C vmfs3 /vmfs/devices/disks/mpx.vmhbaA:T:L:P
Error: Permission denied


(I tried vmhba0:C0:T0:L0 in place of the naa name, but it errors out - presumably it wants the mpx.vmhbaA:T:L:P form shown in the usage text.)


Okay...permission denied even though we're logged in as a valid user and have elevated with su -. Weird. Maybe it just doesn't like vmfs7. Tried again with vmfs3 - nope, still denied. (Turns out the '7' is really the virtual hardware version for VMs and their vmdks, not the filesystem - ESX 4.0 only creates vmfs3.) Alright, man vmkfstools. I'll add in -v for verbose.
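
If I had to guess, the real problem is the target: vmkfstools wants the VMFS partition created with fdisk, not the bare disk - note the ':1' tacked onto the end here. Something along these lines (the block size and volume label are just examples):

[root@localhost disks]# vmkfstools -C vmfs3 -b 1m -S LSI-2TB-1 /vmfs/devices/disks/naa.600605b000175dd01264d94cc2a2d019:1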


I'm still getting the 'tpm ... failed to load tpm' error on booting ESX as well... something else to troubleshoot and fix at some point. ( http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011452 )


Sources:

