May 30

What's New in vSphere 6.5 vCenter Server High Availability

This video covers what's new in vSphere 6.5 vCenter Server High Availability.

Rating: 5/5


May 30

What’s New in vSphere 6.5 Migration

This video covers what's new in vSphere 6.5 when migrating from a Windows vCenter Server to the vCenter Server Appliance 6.5.

Rating: 5/5


May 30

Introduction to the vSphere Client 6.5

This video is an introduction to some new features in the vSphere Client 6.5.

Rating: 5/5


May 30

What’s New in vSphere 6.5 vCenter Server Appliance 6.5 File-Based Backup and Restore

This video covers what's new in vSphere 6.5: vCenter Server Appliance file-based backup and restore.

Rating: 5/5


Mar 14

vSphere Network I/O Control, Version 3

vSphere network I/O control version 3, available in vSphere 6.0, offers granular network resource reservation and allocation across the entire switch.

Rating: 5/5


Mar 12

VMware vSphere Virtual Machine Encryption Performance

Executive Summary

VMware vSphere® virtual machine encryption (VM encryption) is a feature introduced in vSphere 6.5 to enable the encryption of virtual machines. VM encryption secures VMDK data by encrypting I/Os from a virtual machine that has the feature enabled before they are stored in the VMDK. In this paper, we quantify the impact of VM encryption on a VM's I/O performance as well as on some VM provisioning operations such as clone, power-on, and snapshot creation. We show that while VM encryption can lead to bottlenecks in I/O throughput and latency for ultra-high-performance devices (like a high-end NVMe drive) that can support hundreds of thousands of IOPS, for most regular types of storage, such as enterprise-class SSD or VMware vSAN™, the impact on I/O performance is minimal.

Introduction

VM encryption supports the encryption of virtual machine files, virtual disk files, and core dump files. Some of the files associated with a virtual machine, such as log files, VM configuration files, and virtual disk descriptor files, are not encrypted, because they mostly contain non-sensitive data and operations like disk management should be supported whether or not the underlying disk files are secured. VM encryption uses vSphere APIs for I/O filtering (VAIO), henceforth referred to as IOFilter.

IOFilter is an ESXi framework that allows the interception of VM I/Os in the virtual SCSI emulation (VSCSI) layer. At a high level, the VSCSI layer can be thought of as the layer in ESXi just below the VM and above the VMFS file system. The IOFilter framework enables developers, both at VMware and at third-party vendors, to write filters that implement additional services on VM I/Os, such as encryption, caching, and replication. The framework is implemented entirely in user space, which allows VM I/Os to be isolated cleanly from the core architecture of ESXi and eliminates any potential impact on the core functionality of the hypervisor; in case of a failure, only the VM in question is affected. Multiple filters can be enabled for a particular VM or VMDK. These filters are chained so that I/Os are processed by each filter serially, one after the other, and then either passed down to VMFS or completed within one of the filters, as illustrated in Figure 1.
Figure 1. IOFilter design
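
To make the chaining idea concrete, below is a minimal Python sketch of a filter chain processing a write serially; the class names and the toy "encryption" are hypothetical and only illustrate the flow, not the actual VAIO/IOFilter API.

    # Conceptual sketch of serial I/O filter chaining (hypothetical names,
    # not the real VAIO/IOFilter API). Each filter transforms the I/O and
    # hands it to the next stage until it reaches the VMFS backend.
    class IOFilter:
        def __init__(self, next_stage):
            self.next_stage = next_stage          # another filter, or the backend

        def write(self, block_addr, data):
            return self.next_stage.write(block_addr, self.transform(block_addr, data))

        def transform(self, block_addr, data):
            return data                           # identity; subclasses override

    class EncryptionFilter(IOFilter):
        def transform(self, block_addr, data):
            # placeholder "encryption" for illustration only
            return bytes(b ^ 0x5A for b in data)

    class VMFSBackend:
        def write(self, block_addr, data):
            print(f"VMFS: wrote {len(data)} bytes at block {block_addr}")

    # Chain: VM I/O -> encryption filter -> VMFS (more filters could sit in between)
    chain = EncryptionFilter(VMFSBackend())
    chain.write(block_addr=42, data=b"guest data")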

VM Encryption Overview

The primary purpose of VM encryption is to secure the data in VMDKs, so that any unauthorized entity that accesses the VMDK data gets only meaningless data. The VM that legitimately owns the VMDK has the necessary key to decrypt the data whenever it is read, before it is fed to the guest operating system. This is done using industry-standard encryption algorithms, with minimal overhead.
While VM encryption does not impose any new hardware requirements, using a processor that supports the AES-NI instruction set speeds up the encryption and decryption operations. To quantify the performance expectations on a traditional server without an AES-NI-enabled processor, the results in this paper were gathered on slightly older servers that do not support the AES-NI instruction set.

Design

VM Encryption Components
Figure 2 shows the various components involved in the VM encryption mechanism: an external key management server (KMS), the vCenter Server system, and one or more ESXi hosts. vCenter Server requests keys from the external KMS, which generates and stores the keys and passes them down to vCenter Server for distribution. An important aspect to note is that there is no "per-block hashing" for the virtual disk.

This means VM encryption protects data against snooping, not against data corruption, since there is no hash for detecting corruption and recovering from it. For added security, the encryption takes into account not only the encryption key but also the block's address, so two blocks of a VMDK with the same content encrypt to different data.
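
The effect of folding the block address into the encryption can be illustrated with a short Python sketch using AES-GCM from the cryptography package; this is only a conceptual illustration of why identical blocks encrypt to different ciphertext, not VMware's actual on-disk cipher construction or nonce scheme.

    # Conceptual only: deriving the nonce from the block address makes two
    # identical plaintext blocks encrypt to different ciphertext. This is not
    # VMware's actual on-disk format.
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    def encrypt_block(block_addr: int, plaintext: bytes) -> bytes:
        nonce = block_addr.to_bytes(12, "big")   # 12-byte GCM nonce from the address
        return aead.encrypt(nonce, plaintext, None)

    same_content = b"\x00" * 512
    print(encrypt_block(0, same_content) != encrypt_block(1, same_content))  # True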

Key Management

To visualize the mechanism of encryption (and decryption), we need to look at how the various elements in the security policy are laid out topologically. The KMS is the central server in this security-enabled landscape. Figure 3 shows a simplified topology.
Figure 3. Encryption-enabled vCenter Server (VC) topology

The KMS is a secure, centralized repository of cryptographic keys. More than one KMS can be configured with a vCenter Server; however, only KMSs that replicate keys among themselves (usually from the same vendor) should be added to the same KMS cluster. Otherwise, each KMS should be added under a different KMS cluster. One of the KMS clusters must be designated as the default in vCenter Server. Only Key Management Interoperability Protocol (KMIP) v1.1-compliant KMSs are supported, and vCenter Server is the KMIP client, which enables it to talk to any KMIP-compliant KMS vendor. Before transacting with the KMS, vCenter Server must establish a trust connection with it, which must be done manually.
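
As a rough illustration of the kind of exchange a KMIP client performs against a KMS, here is a hedged Python sketch using the open-source PyKMIP client library; the hostname, port, and certificate paths are placeholders, and the actual vCenter-to-KMS trust setup is done through the vSphere UI rather than with code like this.

    # Hedged sketch of a KMIP-style key request using the open-source PyKMIP
    # client. vCenter Server performs an equivalent exchange internally once
    # trust with the KMS is established; all connection details are placeholders.
    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    client = ProxyKmipClient(
        hostname="kms.example.com",        # placeholder KMS address
        port=5696,                         # common KMIP port
        cert="/path/to/client-cert.pem",   # client certificate trusted by the KMS
        key="/path/to/client-key.pem",
        ca="/path/to/kms-ca.pem",
    )

    with client:
        # Ask the KMS to generate a 256-bit AES key and return its identifier,
        # then fetch the key material by identifier for later distribution.
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
        key = client.get(key_id)
        print("key id:", key_id)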

Download

Download a full VMware vSphere Virtual Machine Encryption Performance vSphere 6.5 Guide.

Rating: 5/5


Mar 11

VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices

Executive Summary

With the rise in popularity of hybrid cloud computing, where sensitive VM data leaves the traditional IT environment and traverses public networks, IT administrators and architects need a simple and secure way to protect critical VM data as it crosses clouds and long distances.

The Encrypted vMotion feature available in VMware vSphere® 6.5 addresses this challenge with a software approach that provides end-to-end encryption for vMotion network traffic. The feature encrypts all vMotion data inside the vmkernel using the widely adopted AES-GCM encryption standard, and thereby provides data confidentiality, integrity, and authenticity even if vMotion traffic traverses untrusted network links.

Experiments conducted in the VMware performance labs using industry-standard workloads show the following:

  • vSphere 6.5 Encrypted vMotion performs nearly the same as regular, unencrypted vMotion.
  • The CPU cost of encrypting vMotion traffic is very moderate, thanks to the performance optimizations added to the vSphere 6.5 vMotion code path.
  • vSphere 6.5 Encrypted vMotion provides the proven reliability and performance guarantees of regular, unencrypted vMotion, even across long distances.

Introduction

VMware vSphere® vMotion® [1] provides the ability to migrate a running virtual machine from one vSphere host to another, with no perceivable impact to the virtual machine’s performance. vMotion brings enormous benefits to administrators—it reduces server downtime and facilitates automatic load-balancing.

During migration, the entire memory and disk state associated with a VM, along with its metadata, is transferred over the vMotion network. During this transfer, an attacker with sufficient network privileges could compromise a VM by modifying its memory contents in transit to subvert the VM's applications or its guest operating system. Because of this security risk, VMware has highly recommended that administrators use an isolated or secured network for vMotion traffic, separate from other datacenter networks such as the management network or provisioning network, so that the VM's sensitive data traverses only a secure network.

Even though this recommended approach adds slightly higher network and administrative complexity, it works well in a traditional IT environment where the customer owns the complete network infrastructure and can secure it. In a hybrid cloud, however, workloads move dynamically between clouds and datacenters over secured and unsecured network links. Therefore, it is essential to secure sensitive vMotion traffic at the network endpoints. This protects critical VM data even as the vMotion traffic leaves the traditional IT environment and traverses over the public networks.

vSphere 6.5 introduces Encrypted vMotion, which provides end-to-end encryption of vMotion traffic and protects VM data from eavesdropping occurrences on untrusted network links. Encrypted vMotion provides complete confidentiality, integrity, and authenticity of the data transferred over a vMotion network without any requirement for dedicated networks or additional hardware.

The sections that follow describe:

  • vSphere 6.5 Encrypted vMotion technology and architecture
  • How to configure Encrypted vMotion from the vSphere Client
  • Performance implications of encrypting vMotion traffic using real-life workload scenarios
  • Best practices for deployment

Encrypted vMotion Architecture

vMotion uses TCP as the transport protocol for migrating VM data. To secure the migration, vSphere 6.5 encrypts all vMotion traffic, including the TCP payload and vMotion metadata, using the widely adopted AES-GCM encryption algorithm provided by the FIPS-certified vmkernel vmkcrypto module.
Figure: Encrypted vMotion workflow
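
To show what AES-GCM provides, here is a generic Python sketch using the cryptography package: the ciphertext hides the data, and any tampering on the wire is detected at decryption time. This illustrates the AES-GCM properties only, not the vmkcrypto implementation or the vMotion wire protocol.

    # Generic AES-GCM illustration of the properties Encrypted vMotion relies on:
    # confidentiality plus integrity/authenticity. Not the vmkcrypto module itself.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    nonce = os.urandom(12)
    page = b"VM memory page contents" * 8
    ciphertext = aead.encrypt(nonce, page, b"vmotion-metadata")   # AAD is authenticated too

    # Receiver side: decryption verifies the authentication tag.
    assert aead.decrypt(nonce, ciphertext, b"vmotion-metadata") == page

    # A single flipped bit on the wire makes decryption fail loudly.
    tampered = bytearray(ciphertext)
    tampered[0] ^= 0x01
    try:
        aead.decrypt(nonce, bytes(tampered), b"vmotion-metadata")
    except InvalidTag:
        print("tampering detected, data rejected")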

Encryption Protocol

Encrypted vMotion does not rely on the Secure Sockets Layer (SSL) or Internet Protocol Security (IPsec) technologies for securing vMotion traffic. Instead, it implements a custom encrypted protocol above the TCP layer. This is done primarily for performance, but also for reasons explained below.
SSL is compute intensive and is implemented entirely in user space, while vMotion, which is part of core ESXi, executes in kernel space. This means that if vMotion were to rely on SSL, each encryption/decryption call would have to cross between kernel space and user space, resulting in excessive performance overhead. Using the encryption algorithms provided by the vmkernel vmkcrypto module lets vMotion avoid that penalty.

Although IPsec can be used to secure vMotion traffic, its usability is limited in the vSphere environment because ESXi hosts support IPsec only for IPv6 traffic, not for IPv4 traffic. In addition, implementing a custom protocol above the TCP layer gives vMotion the ability to create the appropriate number of vMotion worker threads and to coordinate efficiently among them to spread the encryption/decryption CPU load across multiple cores.

Download

Download a full VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices Study.

Rating: 5/5


Mar 11

DRS Performance in VMware vSphere 6.5

Introduction

VMware vSphere® Distributed Resource Scheduler™ (DRS) is more than a decade old and continues to innovate with every new version. In vSphere 6.5, DRS comes with many new features and performance improvements to ensure more efficient load balancing and VM placement, faster response times, and simplified cluster management.
In this paper, we cover some of the key features and performance improvements to highlight the more efficient, faster, and lighter DRS in vSphere 6.5.

New Features

Predictive DRS
Historically, vSphere DRS has been reactive: it reacts to changes in VM workloads and migrates VMs to distribute load across hosts. In vSphere 6.5, with VMware vCenter Server® working together with VMware vRealize® Operations™ (vROps), DRS can act on predicted future changes in workloads. This helps DRS migrate VMs proactively and make room in the cluster to accommodate future workload demand.
For example, if your VMs' workload is going to spike at 9 a.m. every day, predictive DRS can detect this pattern beforehand based on historical data from vROps and prepare the cluster resources by using either of the following techniques:

  • Migrating the VMs to different hosts to accommodate the future workload and avoid host over-commitment
  • Bringing a host back from standby mode using VMware vSphere® Distributed Power Management™ (DPM) to accommodate the future demand

How It Works

To enable predictive DRS, you need to link vCenter Server to a vROps instance (that supports predictive DRS), which monitors the resource usage pattern of VMs and generates predictions. Once vROps starts monitoring VM workloads, it generates predictions after a specified learning period. The generated predictions are then provided to vCenter Server for DRS to consume.

Once the VMs’ workload predictions are available, DRS evaluates the demand of a VM based on its current resource usage and predicted future resource usage.

    Demand of a VM = Max (current usage, predicted future usage)

Considering the maximum of current and future resource usage ensures that DRS does not clip any VM's current demand in favor of its future demand. For VMs that do not have predictions, DRS computes resource demand based only on the current resource usage.
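
The rule is simple enough to express directly; the following Python sketch (with made-up numbers) mirrors the formula above, including the fallback for VMs without predictions.

    # Mirrors the formula above: demand = max(current usage, predicted usage),
    # falling back to current usage when no prediction exists. Numbers are made up.
    def vm_demand(current_usage_mhz, predicted_usage_mhz=None):
        if predicted_usage_mhz is None:
            return current_usage_mhz
        return max(current_usage_mhz, predicted_usage_mhz)

    print(vm_demand(800, 2400))   # prediction dominates -> 2400
    print(vm_demand(800, 300))    # current demand is never clipped -> 800
    print(vm_demand(800))         # no prediction -> 800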

Look Ahead Interval

The predictions that DRS gets from vROps always cover a certain period of time starting from the current time, known as the "look ahead interval" for predictive DRS. By default this interval is 60 minutes, which means the predictions always cover the next hour. So if a sudden spike is going to happen within the next hour, predictive DRS will detect it and prepare the cluster to handle it.
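
Conceptually, the prediction DRS consumes is the peak expected usage within that look-ahead interval; here is a small Python sketch with a made-up per-minute forecast (the real prediction format from vROps is not this simple list).

    # Conceptual sketch: the prediction covers the look-ahead interval
    # (60 minutes by default), so a spike forecast 40 minutes out is already
    # visible to DRS now. The forecast series below is made up.
    LOOK_AHEAD_MINUTES = 60

    def predicted_usage(forecast_mhz_per_minute, look_ahead=LOOK_AHEAD_MINUTES):
        window = forecast_mhz_per_minute[:look_ahead]
        return max(window) if window else 0

    forecast = [500] * 40 + [2600] * 30   # spike predicted 40 minutes from now
    print(predicted_usage(forecast))      # 2600 -> DRS can act before the spike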

Network-Aware DRS

Traditionally, DRS has considered only the compute resource (CPU and memory) utilization of hosts and VMs when balancing load across hosts and placing VMs during power-on. This generally works well because, in many cases, CPU and memory are the most important resources needed for good application performance.
However, since network availability is not considered in this approach, a VM is sometimes placed on or migrated to a host that is already network saturated. This can affect application performance if the application happens to be network sensitive.
DRS is network-aware in vSphere 6.5, so it now considers the network utilization of hosts and the network usage requirements of VMs during initial placement and load balancing. This makes DRS load balancing and initial placement of VMs more effective.

How It Works

During initial placement and load balancing, DRS first comes up with a list of the best possible hosts to run a VM based on compute resources, and then uses heuristics to decide the final host based on VM and host network utilization. This makes sure the VM gets the network resources it needs along with the compute resources.

The goal of network-aware DRS in vSphere 6.5 is only to make sure the host has sufficient network resources available, along with the compute resources required by the VM. So, unlike regular DRS, which balances the CPU and memory load, network-aware DRS does not balance the network load in the cluster; it will not trigger a vMotion when there is a network load imbalance.
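
A rough Python sketch of this two-step selection is shown below; the 80% saturation threshold and the simple scoring are assumptions for illustration, not the actual DRS cost/benefit model.

    # Illustrative two-step placement: filter hosts by compute fit first, then
    # use network utilization only to avoid saturated hosts. The threshold and
    # scoring are assumptions, not the real DRS algorithm.
    def place_vm(vm, hosts, net_saturation_pct=80):
        compute_ok = [h for h in hosts
                      if h["free_cpu_mhz"] >= vm["cpu_mhz"]
                      and h["free_mem_mb"] >= vm["mem_mb"]]
        if not compute_ok:
            return None
        not_saturated = [h for h in compute_ok
                         if h["net_util_pct"] + vm["net_util_pct"] < net_saturation_pct]
        candidates = not_saturated or compute_ok
        return max(candidates, key=lambda h: h["free_cpu_mhz"])   # most compute headroom

    hosts = [
        {"name": "esx01", "free_cpu_mhz": 9000, "free_mem_mb": 64000, "net_util_pct": 85},
        {"name": "esx02", "free_cpu_mhz": 7000, "free_mem_mb": 48000, "net_util_pct": 30},
    ]
    vm = {"cpu_mhz": 2000, "mem_mb": 8000, "net_util_pct": 10}
    print(place_vm(vm, hosts)["name"])   # esx02: enough compute, not network saturated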

Download

Download a full DRS Performance in VMware vSphere 6.5 Study Guide.

Rating: 5/5


Dec 14

Configuration Maximum changes from vSphere 6.0 to vSphere 6.5

vSphere 6.5 is now available, and with every release VMware makes changes to the configuration maximums for vSphere. Since VMware never highlights what has changed between releases, I compared the official vSphere 6.5 Configuration Maximums document with the vSphere 6.0 Configuration Maximums. The changes between the versions are listed here.

Configuration vSphere 6.5 vSphere 6.0

Virtual Machines Maximums

RAM per VM 6128GB 4080GB
Virtual NVMe adapters per VM 4 N/A
Virtual NVMe targets per virtual SCSI adapter 15 N/A
Virtual NVMe targets per VM 60 N/A
Virtual RDMA Adapters per VM 1 N/A
Video memory per VM 2GB 512MB

ESXi Host Maximums

Logical CPUs per host 576 480
RAM per host 12TB 6TB *some exceptions
LUNs per server 512 256
Number of total paths on a server 2048 1024
FC LUNs per host 512 256
LUN ID 0 to 16383 0 to 1023
VMFS Volumes per host 512 256
FT virtual machines per cluster 128 98

vCenter Server Maximums

Hosts per vCenter Server 2000 1000
Powered-on VMs per vCenter Server 25000 10000
Registered VMs per vCenter Server 35000 15000
Number of hosts per datacenter 2000 500
Maximum mixed vSphere Client (HTML5) + vSphere Web Client simultaneous connections per VC 60 (30 Flex, 30 maximum HTML5) N/A
Maximum supported inventory for vSphere Client (HTML5) 10,000 VMs, 1,000 Hosts N/A
Host Profile Datastores 256 120
Host Profiles Created 500 1200
Host Profiles Attached 500 1000

Platform Services Controller Maximums

Maximum PSCs per vSphere Domain 10 8

vCenter Server Extensions Maximums

[VUM] VMware Tools upgrade per ESXi host 30 24
[VUM] Virtual machine hardware upgrade per host 30 24
[VUM] VMware Tools scan per VUM server 200 90
[VUM] VMware Tools upgrade per VUM server 200 75
[VUM] Virtual machine hardware scan per VUM server 200 90
[VUM] Virtual machine hardware upgrade per VUM server 200 75
[VUM] ESXi host scan per VUM server 232 75
[VUM] ESXi host patch remediation per VUM server 232 71
[VUM] ESXi host upgrade per VUM server 232 71

Virtual SAN Maximums

Virtual machines per cluster 6000 6400
Number of iSCSI LUNs per Cluster 1024 N/A
Number of iSCSI Targets per Cluster 128 N/A
Number of iSCSI LUNs per Target 256 N/A
Max iSCSI LUN size 62TB N/A
Number of iSCSI sessions per Node 1024 N/A
iSCSI IO queue depth per Node 4096 N/A
Number of outstanding writes per iSCSI LUN 128 N/A
Number of outstanding IOs per iSCSI LUN 256 N/A
Number of initiators that register a PR key per iSCSI LUN 64 N/A

Storage Policy Maximums

Maximum number of VM storage policies 1024 Not Published
Maximum number of VASA providers 1024 Not Published
Maximum number of rule sets in VM storage policy 16 N/A
Maximum capabilities in VM storage policy rule set 64 N/A
Maximum vSphere tags in virtual machine storage policy 128 Not Published

Download

Download a full VMware vSphere 6.5 Configuration Maximums.
Download a full VMware vSphere 6.0 Configuration Maximums.

Rating: 5/5


Dec 09

What’s New in vSphere 6.5: vSphere Integrated Containers

Massimo Re Ferre posted December 9, 2016

Last year we introduced Project Bonneville. The idea behind it, at a high level, is that there is a strong parallel between the constructs Docker uses inside a Linux Docker host and the constructs ESXi uses as a hypervisor. In essence, Project Bonneville allowed you to run a Docker image as a VM on top of a hypervisor (as opposed to just as a container on top of a Linux host). This has the intrinsic advantage that you can operationalize Docker with the constructs you know and love.

One of the biggest problems IT is facing right now is that internal customers ask for "big Linux VMs", only for IT to find out weeks later that containerized applications have been deployed inside those instances, and IT has no idea how to manage, monitor, and secure those applications. The Bonneville approach fixes this problem by instantiating those applications as separate virtual machines. Maybe not cool, but very useful.

Fast forward 18 months, we are releasing (and fully supporting**) these technologies as part of vSphere.

Enterprise Plus customers now have the option of leveraging a feature of vSphere called vSphere Integrated Containers (VIC for short).

vSphere Integrated Containers comprises three different technologies. What makes them unique is that they are all open source. This means you can simply "consume" what we are building, or you can also contribute (if you wish) features that you deem necessary for your particular use case. These three technologies are discussed below.

Note that there is a video at the end of this post that shows these technologies in action. In the meantime, here is a high-level diagram of how these technologies relate to each other:

[Diagram: vSphere Integrated Containers high-level architecture]

VIC Engine

This is a complete rebase of Project Bonneville. When the engineering team was tasked with productizing Bonneville, they decided to re-write it and include a so-called Portlayer. The Portlayer is an interface that exposes vSphere objects and services as container primitives. On top of the Portlayer you can have multiple different personalities. As part of this first announcement we have created a Docker personality (think of VIC Engine today as a Docker "façade" on top of vSphere).

The way you create this “façade” is pretty straightforward: as a vSphere admin you will use a tool called vic-machine (which is part of the VIC Engine binary) to deploy a Virtual Container Host (a vApp) on top of vSphere.

Inside the Virtual Container Host there is a small VM that acts as the Docker endpoint. The IP of that VM is what the vSphere admin hands over to the internal customers that need Docker. When a customer runs something like "docker -H <endpoint IP> run busybox", the busybox Docker image is pulled from Docker Hub and instantiated as a VM inside the Virtual Container Host vApp.
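
From the developer's side, the Virtual Container Host endpoint behaves like any remote Docker API endpoint. Here is a hedged sketch using the Python Docker SDK; the endpoint address is a placeholder, and a real VCH deployment normally requires TLS client certificates, which are omitted here.

    # The VCH endpoint handed out by the vSphere admin is consumed like any
    # remote Docker API endpoint. The address below is a placeholder, and a
    # real VCH normally requires TLS, which is omitted here for brevity.
    import docker

    client = docker.DockerClient(base_url="tcp://<vch-endpoint-ip>:2375")

    # Roughly equivalent to `docker -H <vch-endpoint-ip> run busybox ...`:
    # the busybox image is pulled and instantiated as a ContainerVM in the VCH vApp.
    output = client.containers.run("busybox", "echo hello from a ContainerVM")
    print(output.decode())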

The VIC Engine Github repo is located here.

Harbor

While one could see VIC Engine as the core component of vSphere Integrated Containers, we soon realized that Enterprise customers were asking for more. Hence we decided to create a product that would do more than just mimic the behavior of a compatible Docker Engine.

For this reason, vSphere Integrated Containers also ships Harbor, an Enterprise Docker registry. For vSphere Integrated Containers deployments we have bundled it as a virtual appliance in OVA format. vSphere admins will grab the appliance and import it into the vSphere environment.

vSphere admins can then hand off its FQDN or IP address to their internal customers, who can use the registry service provided by Harbor as a secure Docker registry instantiated inside the data center. Not only can they continue to push and pull to and from Docker Hub, they now also have the option of pushing and pulling to and from a local registry.
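
In practice, developers retag and push images to Harbor exactly as they would with any Docker registry; a hedged sketch with the Python Docker SDK follows, where the registry hostname, project, and credentials are placeholders.

    # Hedged sketch: pull from Docker Hub, retag, and push to a local Harbor
    # registry. The hostname "harbor.example.com", project "library", and the
    # credentials are placeholders for a real deployment.
    import docker

    client = docker.from_env()
    client.login(username="devuser", password="secret",
                 registry="https://harbor.example.com")

    image = client.images.pull("busybox", tag="latest")
    image.tag("harbor.example.com/library/busybox", tag="latest")

    for line in client.images.push("harbor.example.com/library/busybox",
                                   tag="latest", stream=True, decode=True):
        print(line)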

Harbor is built on top of the open source Docker registry foundation, and we added features that most Enterprise customers are asking for: LDAP/AD support, role-based access control, a user interface, and image replication, to name a few.

If you are interested in understanding more about the internals of Harbor this is a good blog post from the engineering team that gets into some of the details.

This is the public Harbor repo on GitHub. People interested in joining the Harbor community (as opposed to just using it as part of the supported vSphere Integrated Containers product) are welcome to interact directly with the engineering team there and/or submit PRs.

Admiral

Admiral is an extension of vRealize Automation 7.2 that adds container support to vRealize Automation. You can find additional information about it here.

However, given that Admiral has been developed independently and can be instantiated standalone, VMware decided to add Admiral to the vSphere Integrated Containers product.

Given that VIC Engine leverages the very robust vSphere features to schedule "ContainerVMs" on hypervisor hosts, we do not use all of the capabilities Admiral provides in a scenario where containers are instantiated on Linux Docker hosts. However, we do leverage a lot of Admiral features in the context of vSphere Integrated Containers, including a user interface for consuming Virtual Container Hosts and the capability of composing multi-container applications to be deployed as a single entity.

You can access the public Admiral Github repo here. As a reminder, Admiral is still considered Beta as part of vSphere Integrated Containers.

See vSphere Integrated Containers in action

Now that we have talked about the technologies that comprise vSphere Integrated Containers, it is time to see them in action. This video shows how to use the three technologies discussed above together.

** Admiral has not GAed yet, so support for Admiral, as part of vSphere Integrated Containers, is limited to the level of support we provide for Beta software.

Rating: 5/5