This article provides best practices to install or upgrade to VMware ESXi 6.0.
This article assumes that:
- You have read the vSphere Installation and Setup Guide for ESXi 6.0 or the vSphere Upgrade Guide for ESXi 6.0 upgrades. These guides contain definitive information. If there is a discrepancy between the guide and this article, assume that the guide is correct.
- You have upgraded your vCenter Server to version 6.0 prior to upgrading your ESXi hosts to version 6.0. For more information, see:
Installing vCenter Server 6.0 best practices (2107948)
Upgrading to vCenter Server 6.0 without migrating SQL database to vPostgres (2109321)
Upgrading to VMware vCenter Server 6.0 with an embedded Platform Services Controller from vCenter Server 5.5 installed using the simple install method (2109559)
Upgrading VMware vCenter Single Sign-on 5.5 to a VMware vCenter Server 6.0 Platform Services Controller 6.0 (2109560)
Upgrading VMware vCenter Server 5.5 to vCenter Server 6.0 with an external Platform Services Controller (2109562)
VMware provides several ways to install or upgrade to ESXi 6.0. For more information, see:
- Methods of installing ESXi 6.0 (2109708)
- Methods for upgrading to ESXi 6.0 (2109711)
ESXi 6.0 System Requirements
When installing or upgrading to ESXi 6.0, ensure that the host meets these minimum hardware configurations supported by ESXi 6.0:
1. Compatible hardware: Ensure your hardware is listed in the VMware Compatibility Guide. This includes:
- System compatibility
- I/O compatibility (Network and HBA cards)
- Storage compatibility
- Backup software compatibility
2. Compatible CPU: Your hosts must have a supported and compatible processor. VMware ESXi 6.0 requires:
- A host with 2 or more CPU cores
- A 64-bit x86 processor released after September 2006. For a complete list of supported processors, see the VMware Compatibility Guide
- The NX/XD bit for the CPU must be enabled in the host BIOS.
- To support 64-bit virtual machines, support for hardware virtualization (Intel VT-x or AMD RVI) must be enabled on x64 CPUs.
Note: To determine whether your server has 64-bit VMware support, download the CPU Identification Utility from the VMware Website.
3. Sufficient memory: Your hosts must have at least 4 GB of RAM; 8 GB is recommended to take advantage of all features and run virtual machines in a typical production environment.
4. Sufficient network adapters: Your host must have one or more Gigabit or faster Ethernet controllers. For a list of supported network adapter models, see the VMware Compatibility Guide.
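The minimums above can be collected into a simple pre-flight check. This is an illustrative sketch, not a VMware-provided tool: the function name and parameters are hypothetical, and you would supply the real values from your hardware inventory.

```python
# Hypothetical pre-flight check against the ESXi 6.0 minimums listed above.
# check_esxi6_minimums() is an illustrative name, not a VMware API; gather
# the actual values from your server's inventory or BIOS settings.

def check_esxi6_minimums(cpu_cores, ram_gb, nic_count, nx_xd_enabled):
    """Return a list of requirement violations (an empty list means the host passes)."""
    problems = []
    if cpu_cores < 2:
        problems.append("ESXi 6.0 requires a host with 2 or more CPU cores")
    if ram_gb < 4:
        problems.append("ESXi 6.0 requires at least 4 GB of RAM (8 GB recommended)")
    if nic_count < 1:
        problems.append("At least one Gigabit or faster Ethernet controller is required")
    if not nx_xd_enabled:
        problems.append("Enable the NX/XD bit in the host BIOS")
    return problems

# Example host that meets the minimums:
print(check_esxi6_minimums(cpu_cores=8, ram_gb=32, nic_count=2, nx_xd_enabled=True))  # []
```

A host that fails any check should be remediated (or its hardware vendor consulted) before you attempt the installation or upgrade.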
ESXi 6.0 Booting Requirements
vSphere 6.0 supports booting ESXi hosts from the Unified Extensible Firmware Interface (UEFI). With UEFI, you can boot systems from hard drives, CD-ROM drives, USB media, or network. Provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI BIOS configurations.
Changing the host boot type between legacy BIOS and UEFI is not supported after you install ESXi 6.0. Changing the boot type from legacy BIOS to UEFI after you install ESXi 6.0 might cause the host to fail to boot. The host displays an error message similar to:
Not a VMware boot bank
ESXi can boot from a disk larger than 2 TB provided that the system firmware and the firmware on any add-in card that you are using supports it. Check your hardware documentation.
ESXi 6.0 has these storage requirements:
1 GB or larger boot device: Installing or upgrading to ESXi 6.0 requires a boot device of at least 1 GB.
Note: Although a 1 GB USB or SD device suffices for a minimal installation, you should use a 4 GB or larger device. The extra space is used for an expanded coredump partition on the USB/SD device.
4 GB extra for scratch partition: When booting from a local disk, a SAN, or an iSCSI LUN, a 5.2 GB disk is required to allow for the creation of the VMFS volume and a 4 GB scratch partition on the boot device.
If a smaller disk or LUN is used, the installer attempts to allocate a scratch region on another available local disk. If no local disk can be found to serve as a scratch partition, /scratch is located on the ESXi host ramdisk, linked to /tmp/scratch. You can later reconfigure /scratch to use a separate disk or LUN.
Because of the I/O sensitivity of USB and SD devices, the installer does not create a scratch partition on them. As above, the host attempts to configure /scratch on an available local disk; if no local disk is available, /scratch is placed on the ramdisk.
For environments that boot from a SAN or use Auto Deploy, it is not necessary to allocate a separate LUN for each ESXi host. You can co-locate the scratch regions for many ESXi hosts onto a single LUN. The number of hosts assigned to any single LUN should be weighed against the LUN size and the I/O behavior of the virtual machines.
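The sizing trade-off described above can be sketched with simple arithmetic. This is an illustrative estimate only: the 4 GB figure is the default scratch partition size, the headroom value is an assumption, and the result still needs to be weighed against the LUN's actual I/O behavior.

```python
# Illustrative arithmetic only: estimate how many hosts' scratch regions fit
# on a shared LUN, reserving some headroom. The headroom_gb default is an
# assumption for this sketch, not a VMware recommendation.

SCRATCH_GB_PER_HOST = 4  # default scratch partition size for ESXi 6.0

def hosts_per_scratch_lun(lun_size_gb, headroom_gb=10):
    """Upper bound on the number of hosts whose scratch regions fit on one LUN."""
    usable_gb = lun_size_gb - headroom_gb
    return max(usable_gb // SCRATCH_GB_PER_HOST, 0)

print(hosts_per_scratch_lun(100))  # 22 hosts at most on a 100 GB LUN
```

In practice, the I/O generated by the hosts sharing the LUN, not just capacity, should determine how far below this upper bound you stay.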
For best performance and memory optimization, do not leave the /scratch partition configured to use the ramdisk. To reconfigure /scratch, see the Set the Scratch Partition from the vSphere Web Client section in the vSphere Installation and Setup Guide.
Best practices for upgrading or migrating ESXi hosts
For a successful upgrade or migration, perform these best practices:
If your vSphere system includes VMware solutions or plug-ins, ensure that they are compatible with the vCenter Server version that you are upgrading to. For more information, see the VMware Product Interoperability Matrix.
If you are upgrading multiple VMware solutions, review this article to ensure you update them in the correct order: Update sequence for vSphere 6.0 and its compatible VMware products (2109760).
Read the Before You Install ESXi section in the vSphere Installation and Setup Guide.
Read the VMware vSphere Release Notes for awareness of any known installation issues.
If the ESXi hosts are part of a VMware Virtual SAN cluster, carefully review the Upgrading the Virtual SAN Cluster section in the VMware Virtual SAN 6.0 Documentation.
To prepare your system for the upgrade:
Check if the version of ESXi or ESX you are currently running is supported for migration or upgrade. For more information, see the Supported Upgrades to ESXi 6.0 section in the vSphere Upgrade Guide.
Check the VMware Compatibility Guide to ensure that your host hardware is tested and certified as compatible with the new version of ESXi. Check for system compatibility, I/O compatibility (network and HBA cards), and storage compatibility.
Note: VMware does not recommend upgrading a host whose hardware is not certified for use with ESXi 6.0. If your host model is not listed in the VMware Compatibility Guide, contact your hardware vendor to check whether they plan to support your hardware devices on ESXi 6.0.
Ensure that sufficient disk space is available on the host for the upgrade or migration. VMware recommends a minimum of 50 MB of free disk space on the installation disk of the host you are upgrading.
If you use remote management software to interact with your hosts, ensure that the software is supported and the firmware version is sufficient. For more information, see the Supported Remote Management Server Models and Firmware Versions section in the vSphere Upgrade Guide.
If a Fibre Channel SAN is connected to the host, detach the Fibre Channel connections before continuing with the upgrade or migration. Do not disable HBA cards in the BIOS.
Ensure you have sufficient access to VMware product licenses to assign a vSphere 6.0 license to the hosts after the upgrade. In the meantime, you can use evaluation mode for 60 days. For more information, see the Applying Licenses After Upgrading to ESXi 6.0 section in the vSphere Upgrade Guide.
Back up the host before performing an upgrade. If the upgrade fails, you can restore the host.
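The eligibility check in the first preparation step can be summarized as a small lookup. The version set below reflects the direct upgrade paths to ESXi 6.0 as summarized in the vSphere Upgrade Guide; treat the guide itself as authoritative and adjust this illustrative table if it disagrees.

```python
# Illustrative sketch of the upgrade-eligibility check, not a VMware tool.
# Direct in-place upgrades to ESXi 6.0 are supported from ESXi 5.x releases;
# older ESX/ESXi hosts need an intermediate upgrade first. Confirm against
# the Supported Upgrades to ESXi 6.0 section of the vSphere Upgrade Guide.

DIRECT_UPGRADE_TO_6_0 = {"5.0", "5.1", "5.5"}

def upgrade_path(current_version):
    """Classify whether a host version has a direct upgrade path to ESXi 6.0."""
    if current_version in DIRECT_UPGRADE_TO_6_0:
        return "direct upgrade to ESXi 6.0 supported"
    return "no direct path; see the Supported Upgrades to ESXi 6.0 section"

print(upgrade_path("5.5"))
print(upgrade_path("4.1"))
```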
These video series include:
- VMware View: Installing View Connection Server – Overview
- VMware View: Installing View Connection Server – Demo
- VMware View: Installing View Composer – Overview
- VMware View: Installing View Composer – Demo
- VMware View: Creating Floating Linked Clones – Overview
- VMware View: Creating Floating Linked Clones – Demo
- VMware View: Creating Dedicated Linked Clones – Overview
- VMware View: Creating Dedicated Linked Clones – Demo
- VMware View: Configuring View Connection Server
- VMware View Persona Management: Install and Configure – Overview
- VMware View Persona Management: Install and Configure – Demo
- Installing Windows VMware View Client
- Installing Mac VMware View Client
- Installing iPad VMware View Client
- Optimizing PCoIP GPO Settings – Overview
- Optimizing PCoIP GPO Settings – Demonstration
- Horizon View HTML Blast Client
- Horizon View HTML Blast Client – Demonstration
- Horizon View 5.2 Mac Client Update
- What’s New with Horizon View 5.2, part 1
- What’s New with Horizon View 5.2, part 2
At the end of this video series, you will have a good understanding of the VMware Horizon solution.
These bootcamps include:
■ Introduction To The Software-Defined Data Center
■ Essentials Of The Software-Defined Data Center
■ Automating the Software-Defined Data Center
■ Compute Virtualization
■ Storage Virtualization
■ Network and Security Virtualization – part1
■ Network and Security Virtualization – part2
At the end of this video bootcamp, you will have a good understanding of the VMware Software-Defined Data Center.
This session explores the use cases and applications enabled by GPU-accelerated VDI, as well as the design considerations for deploying virtual desktops with immersive 2D and 3D graphics using Horizon and NVIDIA GRID vGPU. Learn how to set up your environment for scalable performance and the best possible user experience, with best practices for sizing compute, memory, storage, and GPU configuration.
This session discusses how the Horizon vCenter Orchestrator plugin scales the value of Horizon for our customers and partners by enabling automation, self-service by request and approval, and scalable administration across multi-tenant or highly distributed environments. We’ll cover an introduction to the architecture and system requirements and the primary use cases, and walk through the initial configuration steps with some show-and-tell for end users and administrators.
Speaker: Aaron Black, End User Computing Product Line Manager
In this session we will cover Horizon 6 integration points with Virtual SAN, as well as the streamlined management and implementation capabilities it brings to virtual desktop infrastructures.
Speaker: Rawlinson Rivera, Sr Technical Marketing Architect
The Workspace best practices and architecture session details how to deploy Workspace, best practices from customer deployments, what to avoid, and how to architect for scale.
Speaker: Rasmus Jensen, VMware End User Computing Architect
This session covers the core architecture principles to design a centralized image management solution for physical, virtual, and BYO devices. We’ll review design methodology, sizing and scalability, and integration with Horizon.
Speaker: Stephane Asselin, End User Computing Architect
These bootcamps include:
■ Desktop and Applications Virtualization Best Practices
■ Image Management Architecture Guide with Mirage
■ Workspace Portal Deployment Best Practices
■ Horizon 6 Integration with Virtual SAN
■ Getting Started with Horizon vCenter Orchestrator Plug In
■ High-Performance Graphics for VDI with NVIDIA GRID vGPU
At the end of this video bootcamp, you will have a good understanding of the VMware Horizon solution.
This session includes an overview of the new features in Horizon 6, deployments best practices, conducting assessments, and how to define use cases.
Speaker: Jim Yanik, End User Computing Architect