Change IP ESXi for vSAN host with VDS

A step-by-step walkthrough of changing the IP address of a vSAN-enabled ESXi host that uses a vSphere Distributed Switch (VDS)

I am doing a bit of network redesign in the home lab (a new post to come on that soon) for better manageability and security practices. As part of that, I needed to move my ESXi management IPs to a new subnet. I have a few VMware ESXi hosts that are outfitted with distributed switches and, of course, running in vSAN clusters. I wanted to put together a quick post on the steps used to change the IP of an ESXi host that is part of a vSAN cluster and uses a VDS.

Steps to Change IP ESXi for a vSAN Host

The steps to change the IP of an ESXi host are fairly straightforward, but there are a few things to be aware of. It is a bit more tedious if the host is a member of a vSAN cluster and is running a vSphere Distributed Switch. These are the steps I used to change the IP of a VMware ESXi host that was running as part of a vSAN cluster.

  1. Place the host in maintenance mode
  2. Move the host out of the vSAN-enabled cluster
  3. Remove VDS (maybe)
  4. Change the management IP from the DCUI
  5. Remove the ESXi host from your vSphere inventory
  6. Add the host back to your vSphere inventory
  7. Reconcile VDS
  8. Move the host back into the vSAN cluster
  9. Check your vSAN partitions and networking
  10. Take the host out of maintenance mode

1. Place the host in maintenance mode

The first obvious step is to place the host in maintenance mode and make sure all workloads have been evacuated. With vSAN, choose your data evacuation option that suits your needs.

Place the VMware vSAN host in maintenance mode

The VMware ESXi vSAN host is placed in maintenance mode with all the workloads migrated off.

VMware vSAN host is successfully placed in maintenance mode
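
If you prefer the ESXi shell over the vSphere Client for this step, the standard esxcli maintenance mode command accepts a vSAN data evacuation mode. This is just an alternative sketch; the ensureObjectAccessibility mode below matches the quicker evacuation option, and you can substitute evacuateAllData if you want a full data migration.

# Enter maintenance mode, ensuring object accessibility rather than evacuating all data
esxcli system maintenanceMode set -e true -m ensureObjectAccessibility

# Confirm the host reports it is in maintenance mode
esxcli system maintenanceMode get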

2. Move the host out of the vSAN-enabled cluster

The next step is to move the ESXi host out of the vSAN-enabled cluster. This effectively disables vSAN for the host. You need to perform this step because you will run into issues if you simply remove a vSAN-enabled host from vSphere altogether, change the IP, and then bring it back in.

Move the host out of the vSAN-enabled cluster
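
You can double-check from the host side that vSAN membership has actually been dropped. As a rough check from the ESXi shell, the command below should report that vSAN clustering is not enabled (or show Enabled: false) once the host is out of the cluster; esxcli vsan cluster leave is also available if you ever need to drop membership from the host itself.

# Verify the host is no longer a member of the vSAN cluster
esxcli vsan cluster get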

3. Remove VDS (maybe)

One sticky part of the vSphere Distributed Switch is removing the switch from the host altogether. You may run into "resources in use" errors when trying to remove the switch from the host, as shown below.

Errors with VDS resources in use

With my lab environment, I found it most effective to leave the VDS intact on the ESXi host, remove the host from vSphere inventory, change the IP, bring it back into inventory, and then reconcile the VDS, as I will show in just a bit.
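
If you do want to chase down what is holding the switch, you can see which VMkernel interfaces and uplinks are still attached from the ESXi shell. These are standard esxcli network commands and are purely informational here:

# List the distributed switch proxy on the host along with its uplinks and in-use ports
esxcli network vswitch dvs vmware list

# Show each VMkernel interface and the port group or DVPort it is bound to
esxcli network ip interface list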

4. Change the management IP from the DCUI

Open the DCUI and select Configure Management Network.

Login to the DCUI

Choose IPv4 Configuration.

Navigate to IPv4 configuration

Change the IP to what you want it to be.

Change the IP to the target IP address you want to configure

Apply the changes and restart the management network on your ESXi host.

Apply and restart the management network

Verify you have the proper IP address reflected.

View the newly configured IP address
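
The DCUI is the safest place to make this change since you are already at the console. For reference, the same change can be made from the ESXi shell with the standard esxcli network commands below; the interface name, IP, netmask, and gateway are placeholders for your own values, and note that doing this over SSH will drop your session as soon as the management interface re-IPs.

# Set a new static IPv4 address on the management VMkernel interface (vmk0 here)
esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.10.20.15 -N 255.255.255.0

# Point the default gateway at the new subnet's router
esxcli network ip route ipv4 add -g 10.10.20.1 -n default

# Verify the new address took effect
esxcli network ip interface ipv4 get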

5. Remove the ESXi host from your vSphere inventory

The only way to remove a VMware ESXi host with connected VDS resources from vSphere inventory is to have the host in a disconnected state. After changing the IP address of the host, it should shortly show as disconnected (not responding), which allows you to remove it from the vSphere inventory. As a note, the screenshot below shows a different host I removed from inventory as I rolled through the cluster; however, I used the same process throughout.

Remove your not responding host from vSphere inventory

6. Add the host back to your vSphere inventory

Now that we have the VMware ESXi host’s IP address changed, we can add it back to the vSphere inventory.

Add the host back to the vSphere datacenter

7. Reconcile VDS

After you add the ESXi host back to your vSphere inventory, you will need to reconcile your VDS. In my environment, I found that simply adding the switch back to the re-IP'ed host correctly resolves the VDS host proxy switch issues. You may see the warning below when the host is added back to vSphere inventory.

VDS proxy switch out of sync warning

To reconcile VDS back to the host properly, you just need to add the VDS back to the host. This will reassociate the VDS proxy switch, etc.

Add the host back to the VDS switch

You will need to buzz through and make sure you have your uplinks and port groups assigned.

Map uplinks and VMkernel ports

8. Move the host back into the vSAN cluster

Once you have your VDS switch synchronized back to the ESXi host, you can move it back into the vSAN-enabled cluster.

Move the host back into the vSAN cluster

9. Check your vSAN partitions and networking

I like to double-check the vSAN partitions and information using the command:

esxcli vsan cluster get
Check your vSAN sub-cluster UUID
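
In the output, the fields I pay the most attention to are Sub-Cluster Member Count (it should match the number of hosts in the cluster) and Sub-Cluster Member UUIDs. To check vSAN networking itself, you can confirm which VMkernel interface carries vSAN traffic and ping a neighbor host's vSAN IP over it; the vmk1 interface name and IP address below are placeholders for your environment.

# Show which VMkernel interface is tagged for vSAN traffic
esxcli vsan network list

# Ping another vSAN host's vSAN IP over that interface (interface and IP are examples)
vmkping -I vmk1 10.10.30.12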

10. Take the host out of maintenance mode

Now, we can take the host out of maintenance mode.

Host still in maintenance mode
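
If you are still at the console, the shell equivalent is the same esxcli command used earlier with the enable flag flipped:

# Take the host out of maintenance mode from the ESXi shell
esxcli system maintenanceMode set -e false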

You should see any red bangs (alarms) clear as the data resynchronizes and your networking and everything else settles back to the way it should be.

Maintenance mode exited and cluster data resynchronized
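
If you want to watch the resync and overall health from the host rather than the vSphere Client, newer vSAN releases expose this through esxcli as well. This assumes your build has the vSAN health and debug namespaces available, so treat these as optional checks.

# Summarize any objects still resynchronizing (available on recent vSAN versions)
esxcli vsan debug resync summary get

# Run the host-side view of the vSAN health checks
esxcli vsan health cluster list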

Troubleshooting

I did run into a couple of issues that were somewhat self-inflicted. One was that I didn't have the proper firewall rules in place between my firewall zones to allow the traffic. If vCenter Server is on a separate subnet (it was for me for a while, since I changed the ESXi host IPs first and re-IP'ed the vCenter Server afterward), make sure you have the proper rules in place to allow the traffic.

Firewall blocking traffic between vCenter Server and the ESXi host
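
A quick way to confirm the path from a re-IP'ed host back to vCenter Server is to test it from the ESXi shell. vCenter to ESXi management traffic needs at least TCP 443 and TCP/UDP 902 open between the zones; the vCenter IP address below is a placeholder for your own.

# Basic reachability from the host's management interface to vCenter (IP is an example)
vmkping 10.10.20.50

# Check that the host can reach vCenter on TCP 443 (nc ships with ESXi)
nc -z 10.10.20.50 443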

Another issue: on one of the hosts, I wasn't paying attention as I was performing other lab tasks, and I removed the host from vSphere inventory without first taking it out of the vSAN cluster. This resulted in a cluster partition on the host, even though I had network connectivity between the host and vCenter. I had to have the host leave the vSAN cluster once again and then bring it back into the vSAN-enabled cluster to rejoin it properly.

Wrapping Up

Make sure to lab your environment out first before making the changes to your production ESXi hosts. This helps to catch things like the firewall rules mentioned earlier. It also helps to get a feel for the process in your environment.

Brandon Lee

Brandon Lee is the Senior Writer, Engineer, and owner at Virtualizationhowto.com, and a 7-time VMware vExpert with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies as well as in various industries, he has extensive experience in various IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications, loves the outdoors, and enjoys spending time with family. Also, he goes through the effort of testing and troubleshooting issues, so you don't have to.

2 Comments

  1. Hi Brandon, I would like to know if it's possible to shut down the ESXi hosts of the entire vSAN cluster, modify the ESXi management IP, and then rejoin them to the cluster. Thank you!

    1. Hardy,

      I would do this one host at a time. Are the hosts added to the cluster via IP or FQDN? Also, is this vSphere Standard Switches or Distributed Switches? vSAN or traditional storage? FQDN makes things easier for sure, since the hostname in the cluster won't change; only DNS needs to change.

      However, if the ESXi host is added to the cluster via IP and you have vSphere Standard Switches, the process I would go through is to put one host at a time into maintenance mode. It is important to migrate the VMs off to other hosts. The reason is that if you leave VMs on the host, it disconnects, and you bring it back in under another IP, all the VMs on that host will have new VM IDs generated. Most backup solutions will have problems with this and will require a new full backup.

      Once in maintenance mode, you don't have to shut down the host entirely. Go to the console of the host and change the management IP address. The host under the original IP will go to "not responding" in the cluster. Just remove the host from the cluster, then add it back with the new IP. The storage networks, vMotion networks, etc. will remain intact with the same subnets, so you won't see any change there. The cluster will reconfigure HA on the host.

      Once the host is back in the cluster, test vMotion and other things, move a non-critical VM over, and make sure everything works as expected. Then just rinse and repeat.

      Brandon
