Homelab vSphere/ESXi Setup

As part of my day job as a Systems Administrator, I manage several clusters of vCenter/ESXi hosts that run our server environment. This virtualized environment lets us run and manage a variety of workloads, and I wanted to recreate that experience at home. Through the VMUG program, I was able to get all the licensing needed to bring the enterprise software into my home lab. The rest of this post is devoted to the hardware and the thinking behind the lab; there are already many good sources on how to set it up, and I've included a few links with good information.
I had a few goals in building a home lab and choosing a VMware-based setup. First and foremost, it is a learning opportunity to further my career, and it proved highly useful while preparing for the VCP certification. The computing power around the house also lets me host other services and experiment with new technologies outside of work and production; I'm currently digging into Docker, Kubernetes, Zabbix, and Linux-based servers in general. Finally, the cluster delivers services to the home itself, such as network monitoring (Zabbix), ad-blocking/filtering (a redundant pair of containerized Pi-Holes), DDNS anchoring, and media streaming.
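As a small illustration on the ad-blocking side, a script like the one below (Python with the dnspython package; the addresses are placeholders, not my actual resolvers) can quickly confirm that both Pi-Hole instances are answering DNS queries:

```python
# Quick health check for a pair of redundant Pi-Hole resolvers.
# Requires dnspython (pip install dnspython); the addresses below are
# placeholders for wherever the two containers actually live.
import dns.resolver

PIHOLE_SERVERS = ["192.168.1.53", "192.168.1.54"]  # placeholder IPs
TEST_NAME = "example.com"

for server in PIHOLE_SERVERS:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 2  # seconds before we call the resolver unhealthy
    try:
        answer = resolver.resolve(TEST_NAME, "A")
        print(f"{server}: OK ({answer[0].address})")
    except Exception as exc:
        print(f"{server}: FAILED ({exc})")
```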
Design Ideas and Constraints
- Cost-effective — Use repurposed hardware where possible; the entire setup cost under $1,000, most of which went to the all-new Asustor NAS
- Quiet and Energy Efficient — The whole setup is quiet and draws ~200 W at the UPS with both NASes and all three nodes running (a rough running-cost estimate is sketched after this list). This also led me to stay away from used server hardware because of the noise and power it requires.
- 3-Tier Architecture — Mirror my office environment by keeping storage external to the servers, allowing easy migration of VMs between hosts
- Reliable — Stick to known vendors and design for redundancy where possible
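For a rough sense of what that ~200 W figure works out to over a year, here is a quick back-of-the-envelope calculation; the electricity rate is an assumption, so plug in your own:

```python
# Rough annual running cost for the lab, based on the ~200 W draw reported
# by the UPS. The electricity rate is an assumed value, not my actual rate.
AVG_DRAW_WATTS = 200        # both NAS units plus all three nodes
RATE_PER_KWH = 0.15         # assumed rate in $/kWh
HOURS_PER_YEAR = 24 * 365

annual_kwh = AVG_DRAW_WATTS / 1000 * HOURS_PER_YEAR
annual_cost = annual_kwh * RATE_PER_KWH

print(f"~{annual_kwh:.0f} kWh/year, roughly ${annual_cost:.0f}/year at ${RATE_PER_KWH}/kWh")
```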
Gear List
NAS
Synology DS220+ (Pre-existing) — SMB Storage for general home stuff
Asustor AS5402T — Primary Storage for the Cluster
- 2x 4TB Seagate Iron Wolf (Pulled from Synology) — NFS Storage
- 2x 2TB Crucial P5 NVMe — iSCSI target
- 2x 500GB Crucial P5 NVMe — NFS Cache
ESXi Hosts
3x Lenovo M720s SFF w/ i5-8500, upgraded to 32GB of RAM
Network
- Unifi 16-Port Switch (Pre-existing) — Main network/server connection
- TP-Link TL-SG105-M2 — 2.5G switch for iSCSI storage
Licensing
- VMUG Subscription
Setup/System Layout

When designing the system, I used a 3-tier virtualization architecture (compute, network, and storage) to mimic my work environment. The three Lenovo hosts provide the compute for the ESXi cluster, controlled by a vCenter Server VM running inside the cluster. Networking is split between a storage network and a general computing/access network. The storage network is a segmented 2.5-gigabit network carrying the iSCSI traffic with jumbo frames enabled; because I had to add PCIe NICs to the Lenovos for it, I selected the small towers rather than the minis. All other traffic (management, vMotion, and VM networking) passes over the Lenovos' integrated NICs and uses vSwitches configured through vCenter. On the storage side, the Asustor provides the primary storage allocated to the cluster. Inside it are four NVMe drives: the two 2TB drives are devoted to fast iSCSI storage for VMs on the cluster, while the two 500GB drives act as a cache in front of the 4TB spinning drives, which are exposed as an NFS volume for additional storage. The Synology is not part of the virtualization environment, though it does provide SMB shares to the rest of my network for general file/video sharing.
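As an illustration of how the external storage ties into the cluster, here is a rough sketch using the pyVmomi Python SDK that mounts an NFS export from the NAS as a datastore on every host in the cluster. The vCenter address, credentials, NAS address, export path, and datastore name are all placeholders rather than the lab's real values, and the same thing can just as easily be done through the vSphere client:

```python
# Sketch: mount an NFS export from the NAS as a datastore on every ESXi
# host in the cluster, using pyVmomi. All names, addresses, and credentials
# below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.lab.local"        # placeholder vCenter address
USER, PWD = "administrator@vsphere.local", "changeme"
NAS_HOST = "192.168.2.10"            # placeholder NAS address on the storage network
NAS_PATH = "/volume1/nfs-datastore"  # placeholder NFS export
DS_NAME = "nas-nfs"                  # datastore name as it will appear in vCenter

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        spec = vim.host.NasVolume.Specification(
            remoteHost=NAS_HOST,
            remotePath=NAS_PATH,
            localPath=DS_NAME,
            accessMode="readWrite",
            type="NFS",
        )
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
        print(f"Mounted {DS_NAME} on {host.name}")
    view.DestroyView()
finally:
    Disconnect(si)
```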
Functionality/Results
Overall, everything works as expected. I can host VMs and services with all the DRS bells and whistles VMware offers, and having a setup similar to work lets me experiment with things before trying them in a professional setting. I can also spin up VMs for one-off tests and leave them running for extended periods without tying up my laptop. I have some upcoming project work involving Hyper-V and plan to create nested servers for testing, which should turn into a further blog post once it's done. Finally, I credit this setup with helping to round out and cement the knowledge that got me through the VCP-DCV exam.
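As a side note on that nested-server plan, the guest VMs will need hardware-assisted virtualization exposed to them before Hyper-V (or nested ESXi) will run inside them. Below is a minimal pyVmomi sketch of flipping that flag; the connection details and VM name are placeholders, and the VM has to be powered off for the change to apply:

```python
# Sketch: expose hardware-assisted virtualization to a guest so it can run
# Hyper-V (or ESXi) nested inside it. Connection details and the VM name
# are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next((v for v in view.view if v.name == "hyperv-test-01"), None)
    view.DestroyView()
    if vm is None:
        raise SystemExit("VM not found")

    # nestedHVEnabled passes VT-x/AMD-V through to the guest; the VM must be
    # powered off for the reconfigure to take effect.
    spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
    task = vm.ReconfigVM_Task(spec)
    print(f"Reconfigure task submitted: {task.info.key}")
finally:
    Disconnect(si)
```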
Future Plans
With the change in ownership of VMware to Broadcom, I will be evaluating a move to a different hypervisor. Based on the current market and the requirements for work, the first place I plan to look is XCP-NG, and I will likely write a future post about migrating to the new platform.