New lab setup

Update:
My employer has kindly allowed me to take some APC UPSes off their hands, along with a two-post rack with a 1U PDU, some shelves, and a zero-U PDU (which is on loan). I can take two UPSes, either the 3U 3000VA or the 2U 3000VA model...they just need fresh batteries. I wonder what the lab will draw from them under normal operation... The rack is also perfect...it has a shelf-type bottom, so I can mount the heavy, long gear from the bottom up.
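
For a rough sense of what the lab would pull from a 3000VA unit, here's a back-of-envelope sketch in Python; every wattage figure below is a guess rather than a measurement:

    # Rough UPS load estimate for the lab -- all wattages are guesses, not measurements.

    # Assumed steady-state draw per device (watts)
    loads_w = {
        "esx_hosts": 3 * 120,   # three i5 hosts at ~120 W each under light load (guess)
        "firewall": 30,         # Atom D525 Supermicro box (guess)
        "switch_48port": 60,    # SRW2048 (guess)
        "synology_ds410j": 40,  # four drives spinning (guess)
    }

    total_w = sum(loads_w.values())

    # A 3000VA UPS at a typical 0.7 power factor supports roughly 2100 W.
    ups_capacity_w = 3000 * 0.7

    print(f"Estimated lab draw: {total_w} W")
    print(f"UPS capacity (approx): {ups_capacity_w:.0f} W")
    print(f"Load: {total_w / ups_capacity_w:.0%} of capacity")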

We've decided to invest in a lab for certs and training. Initially it will be my VCP lab, and afterwards it will serve for VCAP and other MS certifications (not to mention acting as a test environment/learning lab for new tech).

Key items:

  • Firewall (Astaro) with multiple NICs for DMZ, etc.
  • Switch upgrade (from 8-port to 24-port Gb)
  • Three ESX hosts (i5, 16GB)
  • The NICs and switches I'm sourcing from eBay/locally.

Infrastructure
My employer has also graciously offered to donate a two-post rack for the lab; I just need to get it home and mounted to the floor. Everything will be mounted on said rack in the basement. Currently I have all my network gear on a 1200VA APC UPS down there...it will all keep running through that until I can justify a racked UPS.

Firewall
The firewall is a Supermicro SYS-5015A-EHF-D525; I'm putting 2x2GB RAM, a dual-port PCIe PRO1000PT NIC, and a spare laptop drive into it (I also got two sub-20dB fans for cooling). It will run Astaro v8, with a total of four NICs (all Intel-based).

Astaro is free for home use. I've used it in a production environment before, and it strikes a very nice balance of price and performance. The home/free edition is the same product as the commercial one; business use simply requires licensing.
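
Nothing is configured yet, but the rough plan for the four NICs looks something like this (the zone names and subnets below are placeholders, not final):

    # Planned interface layout for the Astaro box.
    # Zone names and subnets are placeholders; nothing here is configured yet.

    nic_plan = {
        "eth0": {"zone": "WAN", "notes": "uplink to the internet connection"},
        "eth1": {"zone": "LAN", "subnet": "192.168.1.0/24", "notes": "home + lab management"},
        "eth2": {"zone": "DMZ", "subnet": "192.168.10.0/24", "notes": "anything internet-facing"},
        "eth3": {"zone": "LAB", "subnet": "192.168.20.0/24", "notes": "isolated test segment"},
    }

    for nic, cfg in nic_plan.items():
        print(f"{nic}: {cfg['zone']:<4} {cfg.get('subnet', 'dhcp'):<16} {cfg['notes']}")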

Networking
I'm going with a Linksys 48-port (SRW2048). The 8-port switch may be retained as well; it's rack-mounted already, and you can never have enough ports!

Since each host has 7 NICs, I'll need 21 ports just for the hosts, not to mention other network items like the Synology. Going with the 48-port gives me greater flexibility and room for expansion.
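
For a sanity check on the port count, and a first stab at how the seven NICs per host might be carved up, here's a quick sketch; the role assignments are a working guess, not a finished design:

    # Port-count sanity check for the SRW2048, plus a tentative per-host NIC split.
    # The role assignments are a working guess, not a finished design.

    hosts = 3
    nics_per_host = 7   # 1 onboard + 3x dual-port PRO1000MT

    # Tentative roles for the 7 NICs on each ESXi host (assumption)
    nic_roles = {
        "vmnic0": "management",
        "vmnic1": "vMotion",
        "vmnic2": "iSCSI-A",
        "vmnic3": "iSCSI-B",   # second path if/when MPIO happens
        "vmnic4": "VM traffic",
        "vmnic5": "VM traffic",
        "vmnic6": "spare / testing",
    }

    other_devices = {
        "Synology DS410j": 1,
        "Astaro LAN leg": 1,
        "8-port switch uplink": 1,
    }

    ports_needed = hosts * nics_per_host + sum(other_devices.values())
    print(f"Host NICs: {hosts * nics_per_host}")
    print(f"Other devices: {sum(other_devices.values())}")
    print(f"Total switch ports needed: {ports_needed} of 48")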

ESX hosts
The ESX hosts are spec'd something like this:
  • 3U iStarUSA D-300-PFS case (thanks to Chad Sakac!)
  • Upgraded fans (Vantec variable-speed, 2x60mm, 1x80mm)
  • Core i5-760 (quad-core w. VT)
  • Crucial 4x4GB RAM (CT2KIT51264BA1339)
  • USB flash drive (ESXi)
  • Intel motherboard w. on-board DVI and Intel chipset NIC (BOXDH55HC)
  • 3x PRO1000MT dual-port NICs (total of 7 NICs per host, all Intel)
  • Seasonic S12II 380W PSU
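
For scale, here's a back-of-envelope look at what three of these boxes give the cluster; the per-VM sizing and overcommit figures are assumptions for a typical VCP lab, not targets:

    # Back-of-envelope cluster capacity for the three ESX hosts.
    # The per-VM sizing and overcommit ratio are assumptions, not targets.

    hosts = 3
    cores_per_host = 4      # i5-760, four cores, no hyper-threading
    ram_per_host_gb = 16    # 4x4GB Crucial kit

    vm_ram_gb = 2           # assumed average lab VM: 1 vCPU, 2 GB RAM
    vcpu_per_core = 4       # comfortable overcommit for mostly idle lab VMs (assumption)

    total_ram_gb = hosts * ram_per_host_gb
    total_cores = hosts * cores_per_host

    vms_by_ram = total_ram_gb // vm_ram_gb   # ignores hypervisor overhead
    vms_by_cpu = total_cores * vcpu_per_core

    print(f"Cluster: {total_cores} cores, {total_ram_gb} GB RAM")
    print(f"~{min(vms_by_ram, vms_by_cpu)} lab VMs at {vm_ram_gb} GB each (RAM-bound)")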

Storage

My trusty Synology DS410j will do iSCSI duties until it can't keep up; then I'll look at options.

Alternatives include a lightweight server running StarWind with a many-spindle RAID10, or the Synology DS411+ (approximately 4x the iSCSI performance of the DS410j). If I upgrade down the line, I think I'll want to move to MPIO (so at least two ports); not for any performance requirement, but to have that option available to configure in my lab.
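
To show what MPIO buys in the lab, here's a conceptual sketch of round-robin path selection across two iSCSI portals; the addresses are made up, and this only illustrates the idea, not how ESXi's multipathing is actually implemented:

    # Conceptual illustration of round-robin MPIO over two iSCSI paths.
    # This just shows the idea -- it is not how ESXi's multipathing stack works internally.

    from itertools import cycle

    # Two target portals, e.g. two NICs on a future storage box (addresses are made up)
    paths = cycle(["192.168.20.10:3260", "192.168.20.11:3260"])

    def send_io(block_number: int) -> str:
        """Pretend to issue an I/O, alternating paths round-robin style."""
        path = next(paths)
        return f"I/O for block {block_number} -> {path}"

    for block in range(6):
        print(send_io(block))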
