Home Lab Hardware 2.0 – New Hosts
While working on some projects in my home lab recently, I’ve started running into resource shortages. Most of them trace back to my original decision to build the lab out of repurposed desktop hardware. Although my lab no longer runs VMware, I still use it at work and keep up with content like William Lam’s blog. While browsing his VCF9 posts, I started noticing relatively inexpensive machines that would cover my needs with more CPU and RAM for my workloads. The Minisforum MS-A2 was the obvious first candidate, with its 16-core/32-thread AMD processor and support for up to 96GB of RAM, but shipping times were long and I don’t need the smallest form factor available. Further research led me to the Minisforum 795S7, which has similar specifications in a larger case.
The 795S7 fits exactly what I was looking for in a lab machine. The CPU and RAM mentioned above let me drop from three hosts to two while ending up with more capacity than before. For storage, each machine has two NVMe slots; since I use a NAS for VM storage, each host only needed a single 256GB drive. If I ever go back to a VMware stack, I could populate the second slot to try NVMe tiering. On the network side, the open PCIe slot let me move the 2.5G cards over from my old hosts, giving each machine a second NIC for the storage network. That also matters for a potential VMware swap, since the onboard NICs are Realtek, which lack ESXi support.
Host Hardware BOM (w/ Amazon Affiliate Links)
Racking and Network Topology
Going from three hosts to two also let me shrink the lab’s footprint. I ended up opening my laser software and designing a rack out of 6mm MDF sheets, which I spray-painted; the two hosts sit inside it and both NAS devices sit on top. The rack creates a bit of a hot/cold aisle effect, since it vents warm air out the back. Swapping the network connections over was one-for-one. Here is a diagram of the current connections-
Installation Process
To accomplish the migration, I took the following steps-
- Power down all VMs except the XO appliance machine
- Migrate all VMs to host 3 as this IP address would no longer be in use
- Change the pool master to host 3 (the pool-side commands are sketched after this list)
- Power down and remove hosts 1 & 2 from the pool
- Install XCP-ng on the new hosts*
- Add the new hosts to the XO server
- Create a new pool for the hosts to reside in
- Migrate VMs onto the new pool*
- Power down and remove the remaining old host from the pool
- Pull drives and e-waste all hosts
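Most of these steps ran through the Xen Orchestra UI, but the pool-side pieces (re-designating the master and ejecting the old hosts) map directly onto XCP-ng’s xe CLI. Here is a minimal sketch of that part, wrapping xe from Python; the host name labels are placeholders rather than my actual host names, so treat it as an illustration of the calls involved, not exactly what I ran.

```python
import subprocess


def xe(*args: str) -> str:
    """Run an xe command and return its trimmed stdout."""
    result = subprocess.run(["xe", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()


def host_uuid(name_label: str) -> str:
    """Look up a host's UUID by its name label."""
    return xe("host-list", f"name-label={name_label}", "--minimal")


# Make host 3 the pool master so hosts 1 and 2 can be pulled.
xe("pool-designate-new-master", f"host-uuid={host_uuid('xcp-host-3')}")

# Eject the two retiring hosts from the pool. Ejecting a host also
# reinitializes its local storage, which didn't matter here since the
# drives were being pulled anyway.
for old_host in ("xcp-host-1", "xcp-host-2"):
    xe("pool-eject", f"host-uuid={host_uuid(old_host)}")
```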
Migration Notes and Wrap Up
The migration went relatively well, with a few notes and one complaint about XCP-ng. First, while prepping the new hosts, I had to change a couple of UEFI settings-
- Disable host secure boot - Setup > Security > Secure Boot > Set to "Disabled"
- Disable the host Global C-State Control - Setup > Advanced > AMD CBS > CPU Common Options > Global C-State Control set to "Disabled"
The next issue was VM migration speed. It was significantly slower than I would have liked, and I don’t have a great answer for how to improve it. The internal XO VM handled the migrations for everything except itself; to move that appliance, I ran a warm migration from a second XO instance installed in my Windows Subsystem for Linux (WSL) environment. That still ran slower than I expected, but it worked.

I also tried simply reattaching disks; however, XCP-ng does not store an equivalent of a VMX file alongside the disk in the storage repository, so restoring a VM from only its disk means recreating the VM and attaching the old disk to it. Overall, aside from taking longer than I would have liked, the process was straightforward, and I completed it in an afternoon.
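For anyone who hits the same “disk only” situation, the rebuild is roughly: create an empty VM shell, then attach the existing VDI to it as a bootable device. Below is a minimal sketch using xe; the VDI name label and VM name are placeholders, and memory, vCPU, and boot-mode settings still have to be reapplied by hand, since there is no VMX-style metadata to read them from.

```python
import subprocess


def xe(*args: str) -> str:
    """Run an xe command and return its trimmed stdout."""
    result = subprocess.run(["xe", *args], capture_output=True, text=True, check=True)
    return result.stdout.strip()


# Find the orphaned disk by its name label (placeholder name).
vdi_uuid = xe("vdi-list", "name-label=old-vm-disk", "--minimal")

# Create an empty VM shell from the generic template (placeholder VM name).
vm_uuid = xe("vm-install", "template=Other install media", "new-name-label=restored-vm")

# Attach the existing disk as the first, bootable device. Memory, vCPUs,
# and boot mode still need to be set separately before powering it on.
xe("vbd-create", f"vm-uuid={vm_uuid}", f"vdi-uuid={vdi_uuid}",
   "device=0", "bootable=true", "type=Disk")
```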