Wednesday, February 10, 2021

VMware PowerCLI: Modify ESXi Hosts' Primary and Secondary DNS Servers Using the Command Line.

VMware ESXi DNS Server Configuration/Modification Using PowerCLI

We can easily change the primary and secondary DNS servers of all ESXi hosts in a vCenter using the simple steps below.

Steps:

1. Download and install VMware PowerCLI on any Windows machine in the network. If there is a firewall between the machine running PowerCLI and vCenter, port 443 should be allowed through the firewall. In my case, I installed PowerCLI on the vCenter Server itself.

PowerCLI Download URL

2. Launch PowerCLI by right-clicking the PowerCLI shortcut and choosing "Run as administrator".

3. Execute the command below first to connect PowerCLI to the vCenter Server.

Connect-VIServer 172.25.1.10

        Note: Replace the vCenter IP above with your actual vCenter Server IP.
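
If the account you launched PowerCLI with does not have access to vCenter, credentials can also be passed explicitly when connecting (a quick sketch using the same example IP; any vCenter SSO or AD account you normally use will work):

Connect-VIServer -Server 172.25.1.10 -Credential (Get-Credential)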



4. Copy and paste the commands below, replacing the DNS server IPs with your actual DNS servers.


# Build the list of DNS servers (primary first, then secondary)
$dnsaddress = @()
$dnsaddress += "192.168.111.10"
$dnsaddress += "192.168.111.11"

# Apply the DNS servers to the network configuration of every host in the vCenter
Get-VMHost | Get-VMHostNetwork -ErrorAction SilentlyContinue | Set-VMHostNetwork -DnsAddress $dnsaddress


5. The command will print output with the details of each ESXi host and its new DNS servers.
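
Optionally, to confirm the change, the same cmdlets can be run read-only to list the DNS servers now configured on every host (a quick verification step, not part of the original procedure):

Get-VMHost | Get-VMHostNetwork | Select-Object VMHost, DnsAddress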




Monday, February 1, 2021

vCenter 6.0: VCSA Appliance is not booting, Stuck at progress bar

 Problem:

1. vCenter 6.0: the VCSA appliance is not booting and stays stuck at the white progress bar for a long time.

2. Unable to log in to the VCSA appliance with the root password even after resetting it; it keeps saying wrong credentials.

3. /dev/sda3 is full when checked from rescue mode.



Reason:

This behavior can occur when a VCSA file system fills up because of audit.log. Most of the time, /dev/sda3 becomes full due to audit.log. This prevents the VCSA from booting, and even a newly reset root password may not work.
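
If you want to confirm what is consuming the space before clearing anything, a generic check from the rescue shell (assuming the standard Linux tools present on the appliance) is:

df -h
du -sh /var/log/*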

Solution:

Clear audit.log by logging in through rescue mode.

1. Open the VCSA console and reboot the VCSA appliance VM.
2. Press the space bar on the GRUB screen to stop the automatic boot.
3. Press "p" and enter the GRUB password; the default password is "vmware".
4. Select "VMware vCenter Server appliance" and press "e".
5. Select the line starting with "kernel /vmlinuz...." and press "e".
6. Append "init=/bin/bash" to the end of the line, after a space.
7. Press Enter, then type "b" to boot.
8. Once it is booted, change to the location of audit.log by executing the command "cd /var/log/audit".
9. Execute the command "truncate -s 0 audit.log" to clear audit.log.
10. Verify the disk utilization by running the command "df -h".
11. Reboot the VCSA appliance by running the commands below.

mkfifo /dev/initctl
reboot -f

VMware App Volumes: Error From Manager (Error Code 401) Invalid Session Cookie: Session Key Does Not Match Any Active Sessions

 

Problem:

VMware App Volumes gives the error "Error From Manager (Error Code 401) Invalid Session Cookie: Session Key Does Not Match Any Active Sessions", and users fail to connect to their AppStacks.



Reason and Solution

Reason 1:
As per the VMware knowledge base, the issue occurs because of additional authentication calls for Agent-Manager communication, and the solution is to disable the "Agent Session Cookie" feature.
Refer to KBASE for more details.

Reason 2:
I faced this issue recently. After long troubleshooting, I identified that the issue was with vCenter: it was not accepting logins from any user. I then found that the VCSA appliance was out of disk space and fixed the issue by clearing the audit.log.



vSphere 6.7: VM Name Shows as /vmfs/volumes/xxxx, Unable to Power On VM, VM Folder Shows .vmx.lck and .vmx~ Files

Problem:

Unable to power on a few virtual machines. The VM names suddenly changed to names similar to /vmfs/volumes/xxxx.

While browsing the datastore, the .vmx file icon is not shown as a virtual machine file. The files .vmx.lck and .vmx~ are also visible in the virtual machine folder.





Reason:

This issue occurs whenever the .vmx or .vmdk file is locked by another ESXi host. The lock prevents the files from being locked, and the VM from being powered on, by any other ESXi host.

Normally, VM files are locked by a host during power-on and the locks are released during power-off. In some situations the locks are not released even after the VM is powered off, and this issue occurs.

Solution:

Identify which host is holding the lock and clear it by killing the owning process or rebooting that ESXi host. In some situations no running process related to the locked file can be seen; in that case the only solution is to reboot the host.

Steps to Identify Lock:

1. Log in to the ESXi host using SSH.

2. Execute the following command:

vmfsfilelockinfo -p <File path starting with /vmfs/volumes....> -v <vCenter IP> -u "administrator@vsphere.local"

Example:

vmfsfilelockinfo -p /vmfs/volumes/4444456-f3c44426-030f-3622222f000de/VM-0001/VM-0001.vmx -v 172.20.20.12 -u "administrator@vsphere.local"

3. This command will list the ESXi host that holds the lock on this file. Identify that ESXi host and clear the lock by killing the owning process (see the commands below) or by rebooting it.
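
On the locking host, before resorting to a reboot, you can optionally check whether the VM's world is still running and attempt a soft kill first (standard esxcli commands; take the world ID from the list output):

esxcli vm process list
esxcli vm process kill --type soft --world-id <World ID>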