Posts

On servers and racks and stuff...

For goodness sake, order a DVD-ROM at the very least. We have several 2950s and 2850s with CD-ROM drives that are utterly useless for ESX installs. I'm now praying the DRAC card's ISO mount works. If not, big trouble!! Also, a few other points off the top of my head: Order fewer DIMMs of higher capacity, even if you have no plans to expand. Expansion will happen, and you will throw out a ton of 512MB/1GB/2GB FB-DIMM sticks that nobody else wants, for this very reason. Order an iLO/DRAC/etc. card. Just do it. It will save your bottom later. Only use an x64 OS if you're doing a native install; otherwise use ESX or something similar. It just doesn't make sense to have single-purpose servers in this day and age, unless you're talking about an absolute monster of a SQL server or similar. Strongly consider an expansion NIC (dual-port at least) if you don't have Intel NICs on-board. Just a good redundancy strategy. Don't skimp on PDUs in your server rack -...

ESXi storage migration

Figured I'd post something useful about this experience. Really, it's all about planning. Not just having a plan, or even a plan that covers the basics, but a plan that allows for practical things like time off, shipping errors, configuration oversights, and just plain bad luck. We've had all of the above and then some on this latest project. No matter how hard we tried to make our deadlines, we either forgot a key point (ESXi doesn't do storage migration on its own), were given misinformation or not enough information (the ET RS-16 IP-4 doesn't handle jumbo frames at or above 8000 bytes), or just had plain bad luck abound: the crux of the project lay across a perfect storm of not one but three long weekends (Christmas, Boxing Day, New Year's), compounded by different locations having different ideas about which days to take off. I've spent my New Year's Eve (well, 95% of it) on one task: migrate the storage of all the VMs on our NY site's ESXi box fro...
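Since ESXi won't migrate storage on its own, the manual route generally comes down to cloning the disks with vmkfstools and re-registering the VM. A rough sketch, run from the ESXi console or SSH session (datastore and VM names here are placeholders, not the actual ones from this project):

```shell
# Power off the VM first. Then clone its disk to the new datastore;
# vmkfstools -i copies the descriptor and flat file together.
mkdir /vmfs/volumes/new-datastore/myvm
vmkfstools -i /vmfs/volumes/old-datastore/myvm/myvm.vmdk \
              /vmfs/volumes/new-datastore/myvm/myvm.vmdk

# Bring over the rest of the VM's files (vmx config, nvram).
cp /vmfs/volumes/old-datastore/myvm/*.vmx   /vmfs/volumes/new-datastore/myvm/
cp /vmfs/volumes/old-datastore/myvm/*.nvram /vmfs/volumes/new-datastore/myvm/

# Unregister the old copy and register the VM from its new home.
vim-cmd vmsvc/getallvms              # note the VM's ID from the listing
vim-cmd vmsvc/unregister <vmid>
vim-cmd solo/registervm /vmfs/volumes/new-datastore/myvm/myvm.vmx
```

On first power-on, ESXi will ask whether the VM was moved or copied; answering "moved" keeps the existing UUID and MAC addresses.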

MOSS on a fresh 2008 R2 server - error on install

Trying to install MOSS on a 2008 R2 server, fresh installs each time. Got a "This program is blocked due to compatibility issues" error with references to KB article 962935. Here's the fix, courtesy of Thibault B.: slipstreaming SharePoint SP2 into SharePoint 2007 Server with SP1 does the job when installing on a fresh Windows Server 2008 R2. Basically all you have to do is the following: extract OfficeServerwithSP1.exe by executing "OfficeServerwithSP1.exe /extract:C:\yourpathsp1\"; extract officeserver2007sp2-kb953334-x64-fullfile-en-us by executing "officeserver2007sp2-kb953334-x64-fullfile-en-us /extract:C:\yourpathsp2\"; then copy all files from "C:\yourpathsp2\" to "C:\yourpathsp1\Updates\". It worked like a charm for me :) From here: http://social.technet.microsoft.com/Forums/en/sharepointgeneral/thread/c7091bda-867e-49a1-9bc8-c2ef847c92e7
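The three steps above, as a command-prompt sketch (paths are the same placeholders used in the post; I'm assuming the SP2 download carries an .exe extension on your copy):

```shell
REM Extract the SP1-integrated installer and the SP2 patch to temp folders
OfficeServerwithSP1.exe /extract:C:\yourpathsp1\
officeserver2007sp2-kb953334-x64-fullfile-en-us.exe /extract:C:\yourpathsp2\

REM Drop the SP2 files into the installer's Updates folder to slipstream them
xcopy /E /Y C:\yourpathsp2\*.* C:\yourpathsp1\Updates\

REM Then run setup.exe from C:\yourpathsp1\ as usual
```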

Neat trick when making outside IP changes

The guy I'm working with for our router changes has a neat trick as insurance against a bad IP change. Run this command prior to making the IP change on your internet-side interface: reload -n XX (where XX is how long to delay the reload). Then run your IP change commands. If you make a mistake, or had bad info, the router reboots itself after XX; since the changes were never saved, the bad commands are cleared! If everything works, you just cancel the pending reload and save. Clever use of the reload command - there's something you wouldn't learn in school!
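For reference, on Cisco IOS the same safety net is a scheduled reload (the flag syntax in the post may be from a different platform or version; `reload in` / `reload cancel` is the IOS form, and the interface and addresses below are made-up examples):

```shell
! Schedule a reload 10 minutes out, BEFORE touching the config
router# reload in 10

! Make the risky change; do NOT save it yet
router# configure terminal
router(config)# interface GigabitEthernet0/0
router(config-if)# ip address 203.0.113.10 255.255.255.248
router(config-if)# end

! Still reachable? Cancel the reload and only now write the config
router# reload cancel
router# write memory
```

The trick only works because the change lives in the running config: if you lock yourself out, the reload boots the router back to the untouched startup config.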

Clean-up follow-up

...too many ups... Did a massive amount of clean-up yesterday, and the office looks great now! I've ended up moving the monitors further down the desk, and setting the PC up so that it's easy to unplug and tidy cables. The UPS is now on the desk next to it as well, leaving nothing on the floor. The fourth monitor has also been removed, for now, as it was only used for a second PC. I'm going to P2V the BSD box, if possible, so that we don't have another PC sitting around NOT running ESX. There will be two spare PCs if we can get the BSD box virtualized. Tonight I'm going to clean up all the spare parts, organize them, and label the boxes, as well as take a quick inventory so that I can look into getting rid of some of it. Of course I also need to finish up the basement wall...going to be another late night! Oh yeah...I've now also got to build a shelf for the computers...or pile them all up together. Update: I've got most of the stuff cleared out of the office ...

Where to put it all?

I was telling my wife the other day how back in college, or earlier, I would have killed for the setup I'm using now - only now, I'm finding it extremely cumbersome and prone to making my desk very unusable. The desk measures 6x3' and is crammed to the gills with stuff - NICs, RAM, the odd HSF, headphones, CD spindles, screws & drivers, papers, four monitors, speakers, two 8-port switches, wireless router, two keyboards w. mice, and cables upon cables. No matter how often I clean it off, it gets stacked with stuff again in no time. I think the real issue is that I'm just not organized, and have no place to put things even if I was organized. The other issue is that there's a lot of junk! It spills over onto the floor, under and around the desk (where the two APC 1200VA UPSes live). This drives my wife crazy. Actually...I think all of it drives her crazy. I should do something about that...if for nothing else than to make her life a little more sane! It do...

New home server environment

Well, with the days off I took, plus a few extra bits of hardware, things are shaping up nicely.

Hyper-V server
Motherboard: Asus P5E-VM HDMI LGA775
RAM: 2x2GB & 2x1GB Mushkin PC2
CPU: Intel E6400 2.13GHz
Disks: 160GB Seagate SATA for boot, 8x1TB WD Black in RAID5
Cards: Dell-branded LSI 8408 dual SAS w. SAS-SATA breakout cables, Intel PRO/1000 CT NIC

Running 2008 R2 x64 with Hyper-V and the StarWind iSCSI target. The RAID5 array is broken down into three 1.8TB logical disks and one 1TB logical disk; the former are used as VMFS iSCSI datastores, while the latter holds NFS and SMB shares for ISOs, files, etc. The VHDs for the Hyper-V side of the server are also stored on this disk. StarWind was super-easy to set up; the only hitch I ran into was needing to restart the StarWind/iSCSI initiator services to get ESX seeing the LUNs. Performance is 'just fine' so far - nothing empirical to report, yet. The on-board NIC is used for LAN access, while the CT card is used for iSCSI,...
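For posterity, the service-restart workaround looks roughly like this on each end (the StarWind service name is from my install and may differ by version; MSiSCSI is the standard Windows iSCSI initiator service, and the vmhba number on the ESX side varies):

```shell
REM On the 2008 R2 box: bounce the iSCSI target and the initiator service
net stop "StarWind iSCSI Target" & net start "StarWind iSCSI Target"
net stop MSiSCSI & net start MSiSCSI
```

```shell
# On the ESX console: rescan the software iSCSI adapter so the LUNs show up
esxcfg-rescan vmhba32
```

A rescan from the VI client (Storage Adapters > Rescan) does the same job if you'd rather not drop to the console.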