May 23

Location of vSphere Profile-Driven Storage log files (KB: 2056646)

Purpose

This article provides the default location of the vSphere Profile-Driven Storage logs. It may be necessary to use these log files when troubleshooting issues and VMware Support may request these files when creating a Support Request.

Resolution

The vSphere Profile-Driven Storage logs are placed in a different directory on disk depending on the vCenter Server version and the deployed platform:
  • vCenter Server 5.x and earlier versions on Windows XP, 2000, 2003: %ALLUSERSPROFILE%\Application Data\VMware\Infrastructure\Profile-Driven Storage\Logs
  • vCenter Server 5.x and earlier versions on Windows Vista, 7, 2008: C:\ProgramData\VMware\Infrastructure\Profile-Driven Storage\Logs
  • vCenter Server 5.x Linux Virtual Appliance: /var/log/vmware/vpx/sps

    Note: If the service is running under a specific user, the logs may be located in the profile directory of that user instead of %ALLUSERSPROFILE%.

vSphere Profile-Driven Storage logs are grouped by component and purpose:

  • sps.log: The main Profile-Driven Storage logs, consisting of all vCenter Server and Management Webservices connections, internal tasks and events, and information about the storage profile integrity.
  • vim-sps-install.log: This file contains information about the installation of Profile-Driven Storage, including the computer name, operating system revision, the date of installation, and the number of revisions that have been installed or upgraded on the system.
  • wrapper.log: This file provides information about the state of the Java runtime environment.

    Note: As each log grows, it is rotated over a series of numbered component-nnn.log files. On some platforms, the rotated logs are compressed.
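
For example, on the vCenter Server 5.x Linux Virtual Appliance you could follow the main log and inspect its rotated copies from a shell session. This is a minimal sketch using the paths listed above; the rotation suffixes and compression of older files vary by platform and build.

  tail -f /var/log/vmware/vpx/sps/sps.log
  ls -lh /var/log/vmware/vpx/sps/
  # Compressed rotations, if present, can be read without extracting them:
  zcat /var/log/vmware/vpx/sps/sps-*.gz | less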

Additional Information

You cannot view vSphere Profile-Driven Storage logs using vSphere Client or vSphere Web Client. To export these logs, see  Collecting diagnostic information for VMware vCenter Server (1011641).

For related information, see the main article in this series Location of log files for VMware products (1021806).


May 23

Location of vCenter Inventory Service log files (KB: 2056632)

Purpose

This article provides the default locations of the vCenter Inventory Service logs. It may be necessary to use these log files when troubleshooting issues and VMware Support may request these files when creating a Support Request.

Resolution

The vCenter Inventory Service logs are placed in a different directory on disk depending on the vCenter Server version and the deployed platform:
  • vCenter Server 5.x and earlier versions on Windows XP, 2000, 2003: %ALLUSERSPROFILE%\Application Data\VMware\Infrastructure\Inventory Service\Logs
  • vCenter Server 5.x and earlier versions on Windows Vista, 7, 2008: C:\ProgramData\VMware\Infrastructure\Inventory Service\Logs
  • vCenter Server 5.x Linux Virtual Appliance: /var/log/vmware/vpx/inventoryservice

    Note: If the vCenter Inventory Service is running under a specific user, the logs may be located in the profile directory of that user instead of %ALLUSERSPROFILE%.

vCenter Inventory Service logs are grouped by component and purpose:
  • ds.log: The main vCenter Inventory Service logs, consisting of all vCenter Server and Single Sign-On connections, internal tasks and events, and information about the xDB.
  • vim-is-install.log: This file contains information about the installation of Inventory Service including computer name, operating system revision, the date of installation and the number of revisions that have been installed or upgraded on the system.
  • wrapper.log: This file provides information about the status of the Java runtime environment.

    Note: As each log grows, it is rotated over a series of numbered component-nnn.log files. On some platforms, the rotated logs are compressed.
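
As an illustrative check on the vCenter Server 5.x Linux Virtual Appliance (using the paths listed above), you could review the tail of the main log and scan it for recent errors:

  tail -n 100 /var/log/vmware/vpx/inventoryservice/ds.log
  grep -i error /var/log/vmware/vpx/inventoryservice/ds.log | tail -n 20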

To collect the vSphere 5.1 vCenter Inventory Service logs, navigate to Start > All Programs > VMware > Generate Inventory Service log bundle.

Additional Information

You cannot view vCenter Inventory Service logs using the vSphere Client or vSphere Web Client. To export these logs, see Collecting diagnostic information for VMware vCenter Server (1011641).
 
For related information, see the main article in this series Location of log files for VMware products (1021806).


May 23

Location of vCenter Server log files (KB: 1021804)

Purpose

This article provides the default location of the vCenter Server logs.

Resolution

The vCenter Server logs are placed in a different directory on disk depending on vCenter Server version and the deployed platform:

  • vCenter Server 5.x and earlier versions on Windows XP, 2000, 2003: %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter\Logs\
  • vCenter Server 5.x and earlier versions on Windows Vista, 7, 2008: C:\ProgramData\VMware\VMware VirtualCenter\Logs\
  • vCenter Server 5.x Linux Virtual Appliance: /var/log/vmware/vpx/
  • vCenter Server 5.x Linux Virtual Appliance UI: /var/log/vmware/vami

    Note: If the service is running under a specific user, the logs may be located in the profile directory of that user instead of %ALLUSERSPROFILE%.

vCenter Server logs are grouped by component and purpose:

  • vpxd.log: The main vCenter Server logs, consisting of all vSphere Client and WebServices connections, internal tasks and events, and communication with the vCenter Server Agent (vpxa) on managed ESX/ESXi hosts.
  • vpxd-profiler.log, profiler.log and scoreboard.log: Profiled metrics for operations performed in vCenter Server. Used by the VPX Operational Dashboard (VOD) accessible at https://VCHostnameOrIPAddress/vod/index.html.
  • vpxd-alert.log: Non-fatal information logged about the vpxd process.
  • cim-diag.log and vws.log: Common Information Model monitoring information, including communication between vCenter Server and managed hosts’ CIM interface.
  • drmdump\: Actions proposed and taken by VMware Distributed Resource Scheduler (DRS), grouped by the DRS-enabled cluster managed by vCenter Server. These logs are compressed.
  • ls.log: Health reports for the Licensing Services extension, connectivity logs to vCenter Server.
  • vimtool.log: Dump of strings used during the installation of vCenter Server, with hashed information for DNS, the username, and output from JDBC creation.
  • stats.log: Provides information about the historical performance data collection from the ESXi/ESX hosts.
  • sms.log: Health reports for the Storage Monitoring Service extension, connectivity logs to vCenter Server, the vCenter Server database and the xDB for vCenter Inventory Service.
  • eam.log: Health reports for the ESX Agent Monitor extension, connectivity logs to vCenter Server.
  • catalina.<date>.log and localhost.<date>.log: Connectivity information and status of the VMware Webmanagement Services.
  • jointool.log: Health status of the VMwareVCMSDS service and individual ADAM database objects, internal tasks and events, and replication logs between linked-mode vCenter Servers.
  • Additional log files:
    • manager.<date>.log
    • host-manager.<date>.log
Note: As each log grows, it is rotated over a series of numbered component-nnn.log files. On some platforms, the rotated logs are compressed.
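
For example, on the vCenter Server 5.x Linux Virtual Appliance you could identify the most recently written vpxd log before reviewing it; because of the numbered rotation, the highest-numbered file is not necessarily the newest. A minimal sketch using the path listed above:

  ls -lt /var/log/vmware/vpx/vpxd*.log* | head
  # Then follow the file at the top of the listing, for example:
  tail -f /var/log/vmware/vpx/<newest_vpxd_log_from_listing>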

vCenter Server logs can be viewed from:

  • The vSphere Client connected to vCenter Server 4.0 and higher – Click Home > Administration > System Logs.
  • The Virtual Infrastructure Client connected to VirtualCenter Server 2.5 – Click Administration > System Logs.
  • From the vSphere 5.1 and 5.5 Web Client – Click Home > Log Browser, then from the Log Browser, click Select object now, choose an ESXi host or vCenter Server object, and click OK.

May 22

Location of log files for VMware products (KB: 1021806)

Purpose

This article provides links to determine the default location of the most common log files for VMware products.

Resolution

To determine the default log locations for VMware products, see the most relevant document.

vSphere Suite

vCenter Server (formerly VirtualCenter Server):

Update Manager

ESX(i)

vSphere Data Recovery

vSphere Storage Appliance:

Site Recovery Manager


vCloud Suite

vCloud Director

vShield/vCloud Networking and Security (vCNS)

Desktop Computing Suite

View and Horizon View

Horizon Mirage (formerly Mirage)


May 22

Location of ESXi 5.1 and 5.5 log files (KB: 2032076)

Purpose

This article provides the default location of log files on ESXi 5.1 and 5.5 hosts.
 
For other products and versions, see Location of log files for VMware products (1021806).

Resolution

Note: The documentation referenced below is the same for vSphere 5.1 and 5.5 and can be used interchangeably.

You can review ESXi 5.1 and 5.5 host log files directly on the host at the locations described below.

ESXi 5.1 Host Log Files

Logs for an ESXi 5.1 host are grouped according to the source component:

  • /var/log/auth.log: ESXi Shell authentication success and failure.
  • /var/log/dhclient.log: DHCP client service, including discovery, address lease requests and renewals.
  • /var/log/esxupdate.log: ESXi patch and update installation logs.
  • /var/log/lacp.log: Link Aggregation Control Protocol logs.
  • /var/log/hostd.log: Host management service logs, including virtual machine and host Task and Events, communication with the vSphere Client and vCenter Server vpxa agent, and SDK connections.
  • /var/log/hostd-probe.log: Host management service responsiveness checker.
  • /var/log/rhttpproxy.log: HTTP connections proxied on behalf of other ESXi host webservices.
  • /var/log/shell.log: ESXi Shell usage logs, including enable/disable and every command entered. For more information, see vSphere 5.5 Command-Line Documentation and Auditing ESXi Shell logins and commands in ESXi 5.x (2004810).
  • /var/log/sysboot.log: Early VMkernel startup and module loading.
  • /var/log/boot.gz: A compressed file that contains boot log information and can be read using zcat /var/log/boot.gz|more.
  • /var/log/syslog.log: Management service initialization, watchdogs, scheduled tasks and DCUI use.
  • /var/log/usb.log: USB device arbitration events, such as discovery and pass-through to virtual machines.
  • /var/log/vobd.log: VMkernel Observation events, similar to vob.component.event.
  • /var/log/vmkernel.log: Core VMkernel logs, including device discovery, storage and networking device and driver events, and virtual machine startup.
  • /var/log/vmkwarning.log: A summary of Warning and Alert log messages excerpted from the VMkernel logs.
  • /var/log/vmksummary.log: A summary of ESXi host startup and shutdown, and an hourly heartbeat with uptime, number of virtual machines running, and service resource consumption. For more information, see Format of the ESXi 5.0 vmksummary log file (2004566).
  • /var/log/Xorg.log: Video acceleration.

Note: For information on sending logs to another location (such as a datastore or remote syslog server), see Configuring syslog on ESXi 5.0 (2003322).
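
For example, from the ESXi Shell you could spot-check the most commonly consulted of these logs. A minimal sketch using the paths listed above:

  tail -n 100 /var/log/vmkwarning.log               # recent VMkernel warnings and alerts
  tail -f /var/log/hostd.log                        # watch host management activity live
  grep -i scsi /var/log/vmkernel.log | tail -n 20   # recent storage-related VMkernel entries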

Logs from vCenter Server Components on ESXi 5.1 and 5.5

When an ESXi 5.1 or 5.5 host is managed by vCenter Server 5.1 or 5.5, two components are installed, each with its own logs:

  • /var/log/vpxa.log: vCenter Server vpxa agent logs, including communication with vCenter Server and the Host Management hostd agent.
  • /var/log/fdm.log: vSphere High Availability logs, produced by the fdm service. For more information, see the vSphere HA Security section of the vSphere Availability Guide.


May 22

Troubleshooting Fault Domain Manager (FDM) issues in VMware vCenter Server 5.0/5.1 (KB: 2004429)

Symptoms

  • After upgrading from VMware vCenter Server 4.x to 5.0, VMware High Availability (HA) is no longer working.
  • A red exclamation mark displays on the Cluster Object.
  • You are unable to enable VMware HA.
  • Enabling VMware HA fails.

Purpose

This article discusses troubleshooting a component of HA in vSphere 5.x.

For information about troubleshooting HA in vSphere 4.x, see Troubleshooting VMware High Availability (HA) (1001596).

Resolution

Because vCenter Server 5.0 uses Fault Domain Manager (FDM) agents for High Availability (HA), rather than Automated Availability Manager (AAM) agents, the troubleshooting process has changed.

There are other architectural and feature differences that affect the troubleshooting process:

  • One main log file (/var/log/fdm.log) and syslog integration
  • Datastore Heartbeat
  • Reduced cluster configuration time (approximately 1 minute in total, as opposed to 1 minute per host)
  • FDM does not require that DNS be configured on the hosts, nor does FDM rely on other Layer 3 to 7 network services. For more information, see How vSphere HA works in the vSphere Availability Guide.

For more information about HA in vSphere 5.x, see Comparing VMware HA 4.x and vSphere HA 5.0 (2004401).

Known Issues

  • If SSL Certificate checking is disabled in vCenter Server, configuration can fail with Cannot complete the configuration of the vSphere HA agent on the host.
  • On an upgrade using custom SSL certificates, the configuration can fail with vSphere HA cannot be configured on this host because its SSL thumbprint has not been verified.
  • If the webpage on an ESXi host has been disabled, configuration can fail with Unknown installer error.
  • If VMware-fdm-uninstall.sh is run manually in the default location, it does not properly remove the HA package. Configuration can fail with Unknown installer error.
  • If lockdown mode is enabled on an ESXi host, HA configuration can fail with Cannot install the vCenter agent service, vSphere HA agent cannot be correctly installed or configured, or Permission to perform this operation was denied.
  • Migrating a virtual machine from one HA cluster to another changes the virtual machine’s protection state from Protected to Unprotected.
  • FDM goes into an uninitialized state when a security scan is run against an ESXi 5 host. This is resolved in vCenter 5.0 Update 2.
  • For related information, see the vCenter 5.0 Update 1 Release Notes and the vCenter 5.0 Update 2 Release Notes.

    Common Misconfiguration Issues

    • FDM configuration can fail if ESX hosts are connected to switches with automatic anti-DOS features. For more information, see HA fails to configure at 90% completion with the error: Internal AAM Error – agent could not start (1018217).

    • FDM does support Jumbo Frames, but the MTU setting has to be consistent from end to end on every device.

    • Some firewall devices block ICMP pings that have an ID of zero. In such cases, FDM could report that some/all slave hosts cannot ping each other, and/or that the isolation addresses cannot be reached. This issue has been resolved in: 

    Resolution

    To resolve the issue with FDM:

    1. Check the Release Notes for known issues. Ensure that you are using the latest version of vSphere.
    2. Ensure that you have properly configured HA. For information, see How vSphere HA works in the vSphere Availability Guide.
    3. Verify that network connectivity exists from the vCenter Server to the ESXi host. For more information, see Testing network connectivity with the ping command (1003486).
    4. Verify that the ESXi Host is properly connected to vCenter Server. For more information, see Changing an ESXi or ESX host’s connection status in vCenter Server (1003480).
    5. Verify that the datastore used for HA heartbeats is accessible by all hosts.
    6. Verify that all the configuration files of the FDM agent were pushed successfully from the vCenter Server to your ESXi host:
      • Location: /etc/opt/vmware/fdm
      • File Names: clusterconfig (cluster configuration), compatlist (host compatibility list for virtual machines), hostlist (host membership list), and fdm.cfg.
    7. Increase the verbosity of the FDM logs to get more information about the cause of the issue. For more information, see Changing the verbosity of the VMware High Availability Management Agent (FDM) logs (2004540).
    8. Search the log files for any error messages (a search sketch follows these steps):
      • /var/log/fdm.log or /var/run/log/fdm* (one log file for FDM operations)
      • /var/log/fdm-installer.log (FDM agent installation log)
    9. Consult FDM’s Managed Object Browser (MOB), at https://<hostname>/mobfdm, for more information. The MOB can be used to dump debug information about FDM to /var/log/vmware/fdm/fdmDump.log. It can also provide key information about the status of FDM from the perspective of the local ESX server: a list of protected virtual machines, slaves, events etc. For more information, see Managed Object Browser in the vSphere Web Services SDK Programming Guide.
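
    A minimal search sketch for step 8, assuming the default log locations listed above:

      grep -iE "error|warn" /var/log/fdm.log | tail -n 20
      grep -i error /var/log/fdm-installer.log
      ls /var/run/log/fdm*    # any rotated copies of the FDM log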

    If the issue persists after trying these steps, file a Support Request with VMware Support and note this Knowledge Base article (2004429) in the problem description.


    May 22

    vSphere HA agent is unreachable and the Summary tab of the ESXi host reports the error: vSphere HA reports that an agent is in the Agent Unreachable state (KB: 2011192)

    Symptoms

    • The vSphere HA agent is unreachable
    • In the Summary tab of the affected ESXi host, you see the error:

      vSphere HA reports that an agent is in the Agent Unreachable state

    • Restarting the management agents does not resolve the issue
    • Restarting the Virtual Center service does not resolve the issue

    Cause

    This issue occurs if there is a network problem that prevents vCenter Server from contacting the master host and the agent on the host or if all hosts in the cluster have failed. This issue may also occur if the agent on the host has failed and the watchdog process is unable to restart it.

    Resolution

    To resolve this issue:
    1. Determine if vCenter Server is reporting the host as Not Responding. For more information, see Diagnosing an ESX or ESXi host that is Disconnected or Not Responding in vCenter Server (1003409).
    2. If the host is in a Not Responding state, there is a network problem or a total cluster failure. After you resolve this condition, vSphere HA should configure correctly.
    3. If the vCenter Server reports the hosts as responding:
      • Enable SSH access to the host.
      • Connect to the ESXi host using SSH.
      • Review the /var/log/vpxa.log file and check if there are errors related to communication with vCenter Server and the host Management Agent (hostd).
      • Review the /var/log/fdm.log file (Fault Domain Manager log) and check if there are errors related to vSphere High Availability. For more information, see the vSphere 5.1 Availability Guide or the vSphere 5.0 Availability Guide.
    4. Right-click the affected host and click Reconfigure for vSphere HA.
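
    A minimal sketch of the log review in step 3, run from an SSH session on the affected host. The log paths are those listed above; the FDM init script name and its support for a status argument can vary by build.

      grep -i error /var/log/vpxa.log | tail -n 20
      grep -iE "error|warn" /var/log/fdm.log | tail -n 20
      /etc/init.d/vmware-fdm status    # check whether the HA (FDM) agent is installed and running
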
    For additional troubleshooting information, see:

    Update History

    01/13/2014 – Added ESXi and vCenter Server 5.1 to Product Versions.


    May 22

    Best Practice: How to correctly remove a LUN from an ESX host

    Yes, at first glance, you may be forgiven for thinking that this subject hardly warrants a blog post. But for those of you who have suffered the consequences of an All Paths Down (APD) condition, you’ll know  why this is so important.

    Let’s recap on what APD actually is.

    APD is when there are no longer any active paths to a storage device from the ESX, yet the ESX continues to try to access that device. When hostd tries to open a disk device, a number of commands such as read capacity and read requests to validate the partition table are sent. If the device is in APD, these commands will be retried until they time out. The problem is that hostd is responsible for a number of other tasks as well, not just opening devices. One task is ESX to vCenter communication, and if hostd is blocked waiting for a device to open, it may not respond in a timely enough fashion to these other tasks. One consequence is that you might observe your ESX hosts disconnecting from vCenter.

    We have made a number of improvements to how we handle APD conditions over the last number of releases, but prevention is better than cure, so I wanted to use this post to highlight once again the best practices for removing a LUN from an ESX host and avoiding APD:

    ESX/ESXi 4.1

    Improvements in 4.1 mean that hostd now checks whether a VMFS datastore is accessible or not before issuing I/Os to it. This is an improvement, but it doesn't help with I/Os that are already in-flight when an APD occurs. The best practices for removing a LUN from an ESX 4.1 host, as described in detail in KB 1029786, are as follows:

    1. Unregister all objects from the datastore including VMs and Templates
    2. Ensure that no 3rd party tools are accessing the datastore
    3. Ensure that no vSphere features, such as Storage I/O Control, are using the device
    4. Mask the LUN from the ESX host by creating new rules in the PSA (Pluggable Storage Architecture)
    5. Physically unpresent the LUN from the ESX host using the appropriate array tools
    6. Rescan the SAN
    7. Clean up the rules created earlier to mask the LUN
    8. Unclaim any paths left over after the LUN has been removed
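
    As a rough sketch of steps 4, 7 and 8 using the 4.1 esxcli namespace (the rule number, adapter/channel/target/LUN values and device ID below are hypothetical placeholders; KB 1029786 remains the authoritative procedure):

      # Step 4: add and load a MASK_PATH claim rule for the LUN, then apply it to the device
      esxcli corestorage claimrule add -r 500 -t location -A vmhba2 -C 0 -T 0 -L 4 -P MASK_PATH
      esxcli corestorage claimrule load
      esxcli corestorage claiming reclaim -d <naa.id>
      # Steps 7 and 8: after the LUN is unpresented and the SAN rescanned, remove the rule and unclaim leftover paths
      esxcli corestorage claimrule delete -r 500
      esxcli corestorage claimrule load
      esxcli corestorage claiming unclaim -t location -A vmhba2 -C 0 -T 0 -L 4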

    Now this is a rather complex set of instructions to follow. Fortunately, we have made things a little easier with 5.0.

    ESXi 5.0

    The first thing to mention in 5.0 is that we have introduced a new Permanent Device Loss (PDL) condition – this can help alleviate some of the conditions which previously caused APD. But you could still run into it if you don't correctly remove a LUN from the ESX host. The UI and CLI enhancements that make the removal of a LUN easier are covered below, and there are KB articles that go into even greater detail.

    To avoid the rather complex set of instructions that you needed to follow in 4.1, VMware introduced new detach and unmount operations to the vSphere UI & the CLI.

    As per KB 2004605, to avoid an APD condition in 5.0, all you need to do now is to detach the device from the ESX. This will automatically unmount the VMFS volume first. If there are objects still using the datastore, you will be informed. You no longer have to mess about creating and deleting rules in the PSA to do this safely. The steps now are:

    1. Unregister all objects from the datastore including VMs and Templates
    2. Ensure that no 3rd party tools are accessing the datastore
    3. Ensure that no vSphere features, such as Storage I/O Control or Storage DRS, are using the device
    4. Detach the device from the ESX host; this will also initiate an unmount operation
    5. Physically unpresent the LUN from the ESX host using the appropriate array tools
    6. Rescan the SAN
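
    If you prefer the command line to the vSphere UI for steps 4 and 6, here is a sketch of the equivalent ESXi 5.0 esxcli calls (the datastore name and device ID are placeholders):

      esxcli storage filesystem unmount -l <DatastoreName>      # unmount the VMFS volume
      esxcli storage core device set --state=off -d <naa.id>    # detach the device
      esxcli storage core device detached list                  # confirm the device is listed as detached
      esxcli storage core adapter rescan --all                  # step 6: rescan after the LUN is unpresented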

    This KB article is very good since it also tells you which features (Storage DRS, Storage I/O Control, etc) may prevent a successful unmount and detach.

    Please pay particular attention to these KB articles if/when you need to unpresent a LUN from an ESX host.

    Get notified of these blog postings and more VMware Storage information by following me on Twitter: @vmware360


    May 15

    Configuring static routes for vmkernel ports on an ESXi host (KB: 2001426)

    Purpose

    This article provides a command to configure routes to additional gateways for vmkernel ports on an ESXi host.

    Resolution

    Unlike ESX, ESXi does not have a service console. The management network is on a vmkernel port and, therefore, uses the default vmkernel gateway. Only one vmkernel default gateway can be configured on an ESXi/ESX host. You can, however, add static routes to additional gateways/routers from the command line.
     
    To configure a static route to a second gateway/router for the management network: 
    1. Open a console to the ESXi or ESX host. For more information, see  Using Tech Support Mode in ESXi 4.1 and ESXi 5.x (1017910) or Tech Support Mode for Emergency Support (1003677).
    2. In ESXi 4.x and 5.0, run the command:

      esxcfg-route -a target_network_IP netmask default_gateway

      For example, to add a route to the 192.168.100.0 network with a /24 bit subnet mask (255.255.255.0) through a router with an IP address of 192.168.0.1, run one of these commands:
      • esxcfg-route -a 192.168.100.0/24 192.168.0.1
        Or
      • esxcfg-route -a 192.168.100.0 255.255.255.0 192.168.0.1

      Note: In ESXi 5.0, static routes are not persistent across reboots. To ensure that any added static routes are persistent, add the command to the /etc/rc.local file. For more information, see Modifying the rc.local or sh.local file in ESX/ESXi to execute commands while booting (2043564).

      In ESXi 5.1 and ESXi 5.5, run the command:

      esxcli network ip route ipv4/ipv6 add --gateway IPv4_address_of_router --network IPv4_address

      For example, to add a route to the 192.168.100.0 network with a /24 bit subnet mask (255.255.255.0) through a router with an IP address of 192.168.0.1, run this command:

      esxcli network ip route ipv4 add --gateway 192.168.0.1 --network 192.168.100.0/24
    3. When finished, check the host's current routing table with the esxcfg-route -l command. Any static routes display in the output.

    Note: The Host Profile feature in vCenter Server does not save or apply static routes with ESXi 5.0 and 4.x hosts. In ESXi 5.1 and ESXi 5.5, any manually configured static routes are saved or applied using Host Profiles. In order for this functionality to work correctly, the static routes must be added by the process outlined in steps 1-3, then a host profile created from the host. This profile can then be applied to other hosts, which includes the static routes.
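
    Two small, illustrative additions to the steps above (the route values mirror the earlier example):

      esxcli network ip route ipv4 list                                       # ESXi 5.1/5.5: list the configured routes
      echo "esxcfg-route -a 192.168.100.0/24 192.168.0.1" >> /etc/rc.local    # ESXi 5.0: re-add the route at boot, per the note above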

    Additional Information

    To successfully add a static route, the host must have direct subnet access to the router being specified via one of its vmkernel ports. If not, it cannot communicate with the gateway router and reports the error: Unable to route to gateway address x.x.x.x no route to that subnet exists

    For example, to route to the 192.168.100.0/24 network through the gateway router 192.168.0.1, the host must have a vmkernel port configured in the 192.168.0.0/24 network. Without this vmkernel port, it cannot communicate with 192.168.0.1 to forward traffic relating to this static route.

    You must ensure that an entry exists for the network in the host's routing table as a Local Subnet Access network. To validate this, use the esxcfg-route -l command.

    You cannot add a new gateway for an existing subnet in the vmkernel, as you cannot have two default gateways. In the aforementioned example, if an attempt is made to add a new gateway for the 192.168.100.0/24 network, an error similar to this occurs: Duplicate route to network x.x.x.x/xx found. Please delete the old route first.

    Note: When configuring routes in Auto Deploy, the preferred way to create custom network entries is to use the answer file from a reference host. For more information, see the VMware AutoDeploy Documentation Center.


    May 15

    Troubleshooting vSphere Auto Deploy (2000988)

    Symptoms

    This article provides troubleshooting guidance on VMware Auto Deploy.

    Auto Deploy is a new feature of vSphere 5.0 that can be used to quickly provision and configure ESXi hosts.

    Note: For more information, see the vSphere Installation and Setup Guide. The guide contains definitive information. If there is a discrepancy between the guide and the article, assume the guide is correct.

    Resolution

    VMware Auto Deploy can be installed on a standalone Windows machine. VMware Auto Deploy is also available with the VMware vCenter Appliance.

    Important files and locations

    Note: Some of these paths may be hidden by default.

    Default installation paths for the Auto Deploy server:
    • 32-bit: C:\Program Files (x86)\VMware\VMware vSphere Auto Deploy
    • 64-bit: C:\Program Files (x86)\VMware\VMware vSphere Auto Deploy
    Configuration files for the Auto Deploy server:
    • 32-bit: C:\Documents and Settings\All Users\Application Data\VMware\VMware vSphere Auto Deploy
    • 64-bit: C:\ProgramData\VMware\VMware vSphere Auto Deploy
    Configuration files for the vCenter Server Appliance:
    • Configuration files: /etc/vmware-rbd
    • Runtime state files: /var/lib/rbd
    Database files for the Auto Deploy server:
    • 32-bit: C:\Documents and Settings\All Users\Application Data\VMware\VMware vSphere Auto Deploy\Data\db*
    • 64-bit: C:\ProgramData\VMware\VMware vSphere Auto Deploy\Data\db*
    Database files for the vCenter Server Appliance:
    • /var/lib/rbd/db*
    Cache for the Auto Deploy server:
    • C:\Users\All Users\VMware\VMware vSphere Auto Deploy
    • 32-bit: C:\Documents and Settings\All Users\Application Data\VMware\VMware vSphere Auto Deploy\Data\cache\
    • 64-bit: C:\ProgramData\VMware\VMware vSphere Auto Deploy\Data\cache\
    Cache for the vCenter Server Appliance:
    • /var/lib/rbd/cache
    Main configuration files for the Auto Deploy server:
    • 32-bit: C:\Documents and Settings\All Users\Application Data\VMware\VMware vSphere Auto Deploy\vmconfig-autodeploy.xml
    • 64-bit: C:\ProgramData\VMware\VMware vSphere Auto Deploy\vmconfig-autodeploy.xml

    Logging

    Obtaining log files

    To obtain log files through vCenter Server:

    1. Log into vCenter Server with the vSphere Client.
    2. Click Home > Auto Deploy > Download AutoDeploy Log Files.

    To manually obtain log files for the Auto Deploy server, go to:

    %configuration file location%\Logs

    To manually obtain log files for the vCenter Server Appliance, go to one of these locations:

    • /var/log/vmware/rbd
    • /etc/vmware-rbd/httpd/logs

    Note: vCenter Server logs do not include Auto Deploy Logs, and must be collected separately as above.

    Increasing log file size and rotation

    To increase the log file size and rotation:

    1. Open the logging.conf file with a text editor. The file is located at:
      • 32-bit Windows:

        C:\Documents and Settings\All Users\Application Data\VMware\VMware vSphere Auto Deploy\logging.conf

      • 64-bit Windows:

        C:\ProgramData\VMware\VMware vSphere Auto Deploy\logging.conf

      • vCenter Server Appliance:

        /etc/vmware-rbd/logging.conf

    2. Change the size value to 1000000 and the backupCount value to 5. For example:

      [autodeploy]
      size=1000000
      backupCount=5

    3. Save and close the file.
    4. Restart the Auto Deploy service.
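
    How the service is restarted depends on the platform; as an illustrative sketch (the service names shown here are the ones typically used and may vary between builds, so verify them on your system):

      # vCenter Server Appliance:
      service vmware-rbd-watchdog restart
      # Windows: restart the "VMware vSphere Auto Deploy Waiter" service from the Services console (services.msc)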

    Common tasks

    Editing the Auto Deploy service configuration

    To edit the Auto Deploy service configuration:

    1. Open the main Auto Deploy configuration file with a text editor. The file is located at:
      • 32-bit Windows:

        C:\Documents and Settings\All Users\Application Data\VMware\VMware vSphere Auto Deploy\vmconfig-autodeploy.xml

      • 64-bit Windows:

        C:\ProgramData\VMware\VMware vSphere Auto Deploy\vmconfig-autodeploy.xml

      • vCenter Server Appliance:

        /etc/vmware-rbd/autodeploy-setup.xml

    2. Adjust these parameters:
      • <serviceAddress>IP_address</serviceAddress>

        Where IP_address is the Auto Deploy IP address.

      • <defaultValues>
          <port>port_number</port>
          <maxSize>max_cache_size</maxSize>
        </defaultValues>

        Where port_number is the Auto Deploy port, and max_cache_size is the maximum cache size in GB.

      • <vCenterServer>
          <address>IP_address</address>
          <port>port_number</port>
          <user>username</user>
        </vCenterServer>

        Where IP_address is the vCenter Server IP address, port_number is the vCenter Server port, and username is the vCenter Server user name.

    3. Save and close the file.
    4. Restart the Auto Deploy service.

    Note:

    • This information is also stored in the vCenter Server registry on the Auto Deploy Server located at:

      HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vSphere Auto Deploy

    • Before making any registry modifications, ensure that you have a current and valid backup of the registry and the virtual machine. For more information on backing up and restoring the registry, see the Microsoft article 136393.

    Re-registering the Auto Deploy service

    When re-registering the Auto Deploy service with vCenter Server, the Auto Deploy rules might need to be rebuilt on vCenter Server. You may also have to re-register the Auto Deploy service if the vCenter Server or Auto Deploy IP addresses change, if the service cannot start, or if the SSL certificates change.

    Both Windows and the vCenter Server Appliance Auto Deploy register commands use the same syntax and switches.

    • Run the commands from:
      • On Windows:

        C:\Program Files (x86)\VMware\VMware vSphere Auto Deploy\autodeploy-register.exe

      • On the vCenter Server Appliance:

        /usr/bin/autodeploy-register

    • To unregister the Auto Deploy service, run the command:

      autodeploy-register -U -a x.x.x.x -u root -w vmware -p 80

      Where x.x.x.x is the vCenter Server IP address, -u root is the user, and -w vmware is the password.

    • To register the Auto Deploy service, run the command:

      On Windows:

      autodeploy-register -R -a x.x.x.x -u root -w vmware -p 80 -s "C:\ProgramData\VMware\VMware vSphere Auto Deploy\vmconfig-autodeploy.xml"

      On the vCenter Server Appliance:

      autodeploy-register -R -a x.x.x.x -u root -w vmware -p 80 -s /etc/vmware-rbd/autodeploy-setup.xml

      Where x.x.x.x is the vCenter Server IP address, -u root is the user, and -w vmware is the password.

    • If the SSL certificates have changed, use this command to register the Auto Deploy service:

      On Windows:

      autodeploy-register.exe -R -a vc.domain.com -u root -w vmware -s "C:\ProgramData\VMware\VMware vSphere Auto Deploy\vmconfig-autodeploy.xml" -f -T <new_vCenter_Server_SSL_Cert_Thumbprint>

      On the vCenter Server Appliance:

      autodeploy-register -R -a vc.domain.com -u root -w vmware -s "/etc/vmware-rbd/autodeploy-setup.xml" -f -T <new_vCenter_Server_SSL_Cert_Thumbprint>

    Troubleshooting

    To determine information about registered Auto Deploy ESXi hosts:

    1. Access this URL in a web browser:

      https://x.x.x.x:port_number/vmw/rbd/host/

      Where:

      • x.x.x.x is the Auto Deploy server IP address
      • port_number is the Auto Deploy port (6501 by default)
    2. Click on each hash link. Each hash represents an ESXi host that has registered with the Auto Deploy service. The page displays information about the ESXi host, including the DHCP/TFTP that was used to boot, the server model, and the MAC address.

      For example:

      Host List:

      5dc289181e9eecc49590d01fa32b0f42

      • hostname=
      • ipv4=192.168.0.100
      • mac=00:0c:29:4c:8c:29
      • uuid=564df633-d8d2-908b-4107-82da654c8c29
      • vendor=VMware,Inc
    3. There are also two links for boot.cfg and get boot.cfg.
      • Click on the boot.cfg link (or the Get gPXE Configuration link) to view information about the host and the profiles being used.

        For example:

        #!gpxe

        echo
        echo
        echo ******************************************************************
        echo * Booting through VMware Auto Deploy…
        echo *
        echo * Machine attributes:
        echo * . asset=No Asset Tag
        echo * . domain=
        echo * . hostname=
        echo * . ipv4=192.168.0.100
        echo * . mac=00:0c:29:4c:8c:29
        echo * . model=VMware Virtual Platform
        echo * . oemstring=[MS_VM_CERT/SHA1/27d66596a61c48dd3dc7216fd715126e33f59ae7]
        echo * . oemstring=Welcome to the Virtual Machine
        echo * . serial=VMware-56 4d f6 33 d8 d2 90 8b-41 07 82 da 65 4c 8c 29
        echo * . uuid=564df633-d8d2-908b-4107-82da654c8c29
        echo * . vendor=VMware, Inc.
        echo *
        echo * Host Profile: hostprofile-1
        echo * Image Profile: ip-VMware, Inc.-test1-d5107713e36092ff920705dbf627a092
        echo * VC Host: host-14
        echo *
        echo * Bootloader VIB version: 5.0.0-1.2.381531

        echo ******************************************************************

      • Click on the Get boot.cfg link to view information about the cached files being used to boot the server.

        For example:

        bootstate=0
        title=Loading VMware ESXi
        kernel=/vmw/cache/a3/36c0980af7357f5d515242e6458be5/tboot.aaef3f985d1dfc669c9490939c82e36f
        kernelopt=BOOTIF=01-00-0c-29-4c-8c-29
        modules=/vmw/cache/72/1776e38e761db08fff0db5edec43af/b.e174d89c00afa21ae697977203c2b9ce --- /vmw/cache/72/1776e38e761db08fff0db5edec43af/useropts.e174d89c00afa21ae697977203c2b9ce --- /vmw/cache/72/1776e38e761db08fff0db5edec43af/k.e174d89c00afa21ae697977203c2b9ce --- /vmw/cache/a3/36c0980af7357f5d515242e6458be5/a.aaef3f985d1dfc669c9490939c82e36f --- /vmw/cache/ff/eae6feb2e63579c776a6041b3de0da/ata-pata.5fa67d0ce923ca8647a45c431c385879

    Database corruption

    vSphere Auto Deploy utilizes a database to store information about hosts. It is possible that the Auto Deploy database may need maintenance. SQLite is the tool of choice for performing these activities. By default, SQLite comes with the vCenter Server Appliance only. However, the SQLite shell is a free application and is available for use with Windows at http://sqlite.org/download.html.

    • To connect to the Auto Deploy database, run the command:

      sqlite3 "C:\Users\All Users\VMware\VMware vSphere Auto Deploy\Data\db"

    • To verify you are connected to the Auto Deploy database, run the command:

      sqlite> .databases

    • To perform an integrity check of the entire database, run the command:

      sqlite> PRAGMA integrity_check;

      If the database is intact, the output from this command is OK.

    • To reclaim empty or free space from the database, run the command:

      sqlite> VACUUM;

    • To delete and recreate indices from scratch (which may improve performance), run the command:

      sqlite> REINDEX;

    Verifying Auto Deploy profiles and rulesets

    For troubleshooting purposes, it may be necessary to determine which rules are currently applied, and if a particular host meets this compliance. These commands are used with the vSphere 5.0 PowerCLI and the ImageBuilder Snap-in.

    To determine currently deployed rules, run the commands:

    • get-deployruleset

      The output displays the active ruleset to be used with Auto Deploy. An active ruleset is a collection of image/host profiles and the patterns with which these rules match (for example, mac address, vendor, IP address range).

    • test-deployrulesetcompliance ESXhostname

      The output displays a list of the current profiles applied to the host and the expected profile. This is useful for determining if a host is up to date.
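
    If the compliance test reports differences, its output can usually be piped directly to the matching repair cmdlet; for example (the host name is a placeholder):

      test-deployrulesetcompliance ESXhostname | repair-deployrulesetcompliance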

    gPXE Troubleshooting

    During initial boot, an ESXi host attempts to obtain an IP address via DHCP using PXE. In the unlikely event that the server fails to load the PXE boot image, it is possible to boot into a gPXE shell by pressing Ctrl+B. These commands can be useful for troubleshooting PXE networking configuration issues.

    • To obtain a DHCP address, or to configure the address manually, run one of these commands:
      • dhcp net0 (the machine can then be pinged)
      • config net0
      • set net0/ip x.x.x.x
        set net0/netmask x.x.x.x
        set net0/gateway x.x.x.x
        set net0/dns x.x.x.x

    • To verify the routing information, run the command:

      route

    • To verify the PXE image loaded, run the command:

      imgstat

      Note: The default PXE image is vmw-hardwired.

    • To verify TFTP connectivity, run the command:

      imgfetch tftp://x.x.x.x/tramp