I needed to migrate a vCenter Server between datacenters, and the vCenter Server required a new IP address at the destination. Changing a vCenter Server's IP address became straightforward in vSphere 6.5. In my situation, however, there was a complication: the destination network was not available at the source, so I also had to migrate the vCenter Server itself. Here are the steps I went through, followed by an issue I ran into. This was for a 6.7 vCenter Server appliance with an embedded PSC.
- Back up the vCenter Server
- Shut down the vCenter Server
- Clone the vCenter Server
  - This was only a failsafe in case the vCenter Server did not work at the destination, to avoid restoring from backup
- Power on the vCenter Server
- Run and save an export of RVTools
  - I always like to do this before big vCenter Server work so that I know where all of my VMs are
- If only using a vDS, verify you have a port group with ephemeral binding
  - I did not need it, but you might depending on your destination
- Change the IP address of the vCenter Server
  - The new IP address was displayed in the vCenter Server console
- Shut down the vCenter Server
- Migrate the vCenter Server to the destination ESXi host with VMware vCenter Converter Standalone
  - Verify and/or make the following changes in Converter:
    - Required VM hardware version
- Change the DNS records for the vCenter Server
- Power on the vCenter Server
- Verify that the vCenter Server vNIC is connected
- Reboot the vCenter Server
- Delete the original and cloned vCenter Servers at the source
I thought everything went well. All ESXi hosts and VMs appeared to be happy. However, a user reported that the remote console for some of his VMs would freeze after about 30-45 seconds, and I also noticed some vDSs had sync issues. After some research, I found that about 90% of my hosts still had the old vCenter Server IP address in vpxa.cfg.
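To check which hosts still pointed at the old address, you can look at /etc/vmware/vpxa/vpxa.cfg on each host (e.g. `grep serverIp /etc/vmware/vpxa/vpxa.cfg` over SSH), or script the check once you have pulled the files down. Here is a minimal Python sketch of that check; the vpxa.cfg layout shown and the IP addresses are illustrative assumptions, not taken from my environment:

```python
import xml.etree.ElementTree as ET

OLD_VCENTER_IP = "192.0.2.10"  # hypothetical old vCenter Server IP

def has_stale_server_ip(vpxa_cfg_xml: str, old_ip: str) -> bool:
    """Return True if this vpxa.cfg content still points at old_ip."""
    root = ET.fromstring(vpxa_cfg_xml)
    # serverIp sits under the vpxa element in vpxa.cfg (assumed layout)
    server_ip = root.findtext("./vpxa/serverIp")
    return server_ip == old_ip

# Sample fragment shaped like vpxa.cfg, for illustration only
sample = """<config>
  <vpxa>
    <serverIp>192.0.2.10</serverIp>
    <hostIp>192.0.2.50</hostIp>
  </vpxa>
</config>"""

print(has_stale_server_ip(sample, OLD_VCENTER_IP))  # → True
```

Running this against each host's copy of vpxa.cfg gives a quick list of which hosts need attention before you commit to a remediation method.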
VMware has KB1001493, which covers this issue. The KB article offers two methods to resolve it. Every host with the issue needed to be touched, so both methods required a lot of tedious work. At first, I went with method 1 and tried it out on one host. The host showed as not responding after I restarted its management agents, and I had to restart the vCenter Server service to recover it. I did not want all of my hosts to sit in a not-responding state for a long period of time, nor did I want to restart the vCenter Server service after restarting the management agents on each host. Therefore, I went with method 2.
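For reference, method 1 boils down to correcting the serverIp value in /etc/vmware/vpxa/vpxa.cfg on the host and then restarting the management agents. The file edit itself could be scripted along these lines; this is a sketch run against a throwaway local file, with hypothetical IP addresses, not something I ran against live hosts:

```python
import tempfile
import xml.etree.ElementTree as ET

def update_server_ip(path: str, new_ip: str) -> None:
    """Rewrite the serverIp element in a vpxa.cfg-style XML file."""
    tree = ET.parse(path)
    node = tree.find("./vpxa/serverIp")  # assumed location per KB1001493
    if node is None:
        raise ValueError("no serverIp element found in %s" % path)
    node.text = new_ip
    tree.write(path)

# Demonstrate on a throwaway copy with an assumed vpxa.cfg structure
with tempfile.NamedTemporaryFile("w", suffix=".cfg", delete=False) as f:
    f.write("<config><vpxa><serverIp>192.0.2.10</serverIp></vpxa></config>")
    cfg_path = f.name

update_server_ip(cfg_path, "198.51.100.20")  # hypothetical new vCenter IP
print(open(cfg_path).read())
```

On an actual ESXi host, the edit would be followed by restarting the management agents (which is exactly the step that left my test host not responding), so method 2 appealed to me more.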
Method 2 had more steps but seemed cleaner. Essentially, each host is removed from and added back to the vCenter Server one at a time, which seemed like a better approach. Hosts with a vDS required a little more work and documentation, since a host loses access to the vDS when it is removed. That meant first putting each host into maintenance mode before starting the first step, then going through the steps and repeating on the next host. This greatly reduced the risk of VMs hitting any road bumps, since no VMs were running on a host being worked on. Then, of course, each host is added back to the vDS. If a host has no vDS and only standard switches, then there is no need for my extra step, since network connectivity will be fine when the host is removed from the vCenter Server. Keep in mind that performance data, permissions (depending on what level they are set at), VM folder placement, tags, and host-level events/tasks are lost when removing a host from a vCenter Server. By the way, I did not do step 4 (reinstall the VMware vCenter Server agent). The referenced KB article only mentions versions up to 6.0, but the process worked well on 6.7.
Perhaps there’s a cleaner way to do this process. All things considered, though, the IP address change and migration went well: no outages, and the desired outcome was achieved. Because I did two tasks at once, I at first thought that was why I ran into the host issue. However, since there is a KB article on how to resolve it, I expect many people have hit this same issue without combining the two tasks as I did.