Mar 12

VMware vSphere Virtual Machine Encryption Performance

Executive Summary

VMware vSphere® virtual machine encryption (VM encryption) is a feature introduced in vSphere 6.5 to enable the encryption of virtual machines. VM encryption secures VMDK data by encrypting I/Os from a virtual machine (which has the VM encryption feature enabled) before they are stored in the VMDK. In this paper, we quantify the impact of VM encryption on a VM’s I/O performance as well as on some VM provisioning operations such as clone, power-on, and snapshot creation. We show that while VM encryption can become a bottleneck for I/O throughput and latency on ultra-high-performance devices (like a high-end NVMe drive) that can sustain hundreds of thousands of IOPS, for most regular types of storage, such as enterprise-class SSDs or VMware vSAN™, the impact on I/O performance is minimal.

Introduction

VM encryption supports the encryption of virtual machine files, virtual disk files, and core dump files. Some of the files associated with a virtual machine, such as log files, VM configuration files, and virtual disk descriptor files, are not encrypted. This is because they mostly contain non-sensitive data, and operations like disk management should be supported whether or not the underlying disk files are secured. VM encryption uses vSphere APIs for I/O Filtering (VAIO), henceforth referred to as IOFilter.

IOFilter is an ESXi framework that allows the interception of VM I/Os in the virtual SCSI emulation (VSCSI) layer. At a high level, the VSCSI layer can be thought of as the layer in ESXi just below the VM and above the VMFS file system. The IOFilter framework enables developers, both VMware and third-party vendors, to write filters that implement additional services on VM I/Os, such as encryption, caching, and replication. The framework is implemented entirely in user space, which cleanly isolates VM I/Os from the core architecture of ESXi and shields the core functionality of the hypervisor from potential issues; in case of a failure, only the VM in question is affected. Multiple filters can be enabled for a particular VM or VMDK. These filters are chained so that I/Os are processed by each filter serially, one after the other, and then either passed down to VMFS or completed within one of the filters. This is illustrated in Figure 1.
Figure 1. IOFilter design
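
As a rough mental model of the chaining described above (a conceptual sketch only, not VMware source code), each filter either transforms the I/O and passes it on, or completes it itself:

    # Conceptual sketch: a VM I/O passes through chained filters serially until
    # one of them completes it or it reaches the filesystem layer.
    from typing import Callable, List, Optional

    class IO:
        def __init__(self, lba: int, data: bytes):
            self.lba, self.data = lba, data

    # A filter returns the (possibly transformed) I/O to pass down the chain,
    # or None if it completed the I/O itself (for example, a cache hit).
    Filter = Callable[[IO], Optional[IO]]

    def encryption_filter(io: IO) -> Optional[IO]:
        io.data = bytes(b ^ 0xAB for b in io.data)    # stand-in for real encryption
        return io                                     # pass down the chain

    def caching_filter(io: IO) -> Optional[IO]:
        return io                                     # no-op here; could complete reads

    def submit(io: IO, chain: List[Filter]) -> None:
        for f in chain:                               # filters run serially, in order
            io = f(io)
            if io is None:                            # completed inside a filter
                return
        print(f"I/O for LBA {io.lba} passed down to VMFS")

    submit(IO(lba=42, data=b"guest data"), [caching_filter, encryption_filter])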

VM Encryption Overview

The primary purpose of VM encryption is to secure the data in VMDKs, so that any unauthorized entity accessing the VMDK data gets only meaningless data. The VM that legitimately owns the VMDK has the key needed to decrypt the data whenever it is read, before it is fed to the guest operating system. This is done using industry-standard encryption algorithms, securing the traffic with minimal overhead.
While VM encryption does not impose any new hardware requirements, using a processor that supports the AES-NI instruction set speeds up the encryption and decryption operations. In order to quantify the performance expectations on a traditional server without an AES-NI-enabled processor, the results in this paper are from slightly older servers that do not support the AES-NI instruction set.

Design

VM Encryption Components
Figure 2 shows the various components involved in the VM encryption mechanism: an external key management server (KMS), the vCenter Server system, and one or more ESXi hosts. vCenter Server requests keys from the external KMS, which generates and stores the keys and passes them down to vCenter Server for distribution. An important aspect to note is that there is no “per-block hashing” for the virtual disk.

This means VM encryption protects data against snooping, not against data corruption, since there is no hash for detecting corruption and recovering from it. For additional security, the encryption takes into account not only the encryption key but also the block’s address, so two blocks of a VMDK with identical content encrypt to different data.
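
The excerpt above does not name the cipher construction, but the effect it describes can be illustrated with AES-XTS, a standard disk-encryption mode in which the block address acts as a tweak. The sketch below uses the open-source PyCA cryptography package purely as an illustration, not VMware's implementation:

    # Illustration only: with a tweakable mode such as AES-XTS, identical
    # plaintext blocks at different logical block addresses encrypt to
    # different ciphertext.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)                    # AES-256-XTS uses a 512-bit (two-key) key
    plaintext = b"A" * 512                  # same 512-byte sector contents

    def encrypt_sector(lba: int, data: bytes) -> bytes:
        tweak = lba.to_bytes(16, "little")  # tweak derived from the block address
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(data) + enc.finalize()

    c0 = encrypt_sector(0, plaintext)
    c1 = encrypt_sector(1, plaintext)
    print(c0 != c1)                         # True: same content, different ciphertext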

Key Management

To visualize the mechanism of encryption (and decryption), we need to look at how the various elements in the security policy are laid out topologically. The KMS is the central server in this security-enabled landscape. Figure 3 shows a simplified topology.
Figure 3. Encryption-enabled vCenter Server (VC) topology

The KMS is a secure, centralized repository of cryptographic keys. More than one KMS can be configured with a vCenter Server; however, only KMSs that replicate keys among themselves (usually from the same vendor) should be added to the same KMS cluster, and otherwise each KMS should be added under a different KMS cluster. One of the KMS clusters must be designated as the default in vCenter Server. Only Key Management Interoperability Protocol (KMIP) v1.1 compliant KMSs are supported, and vCenter Server acts as the KMIP client. Using KMIP enables vCenter Server to talk to any KMIP-compliant KMS vendor. Before transacting with the KMS, vCenter Server must establish a trust connection with it, which must be done manually.
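
vCenter Server performs this key exchange internally. Purely as an illustration of what a KMIP client workflow looks like, here is a sketch using the open-source PyKMIP library; the hostname, port, and certificate paths are placeholders, not values from the paper:

    # Illustration of a KMIP client creating and retrieving an AES key.
    # vCenter Server is the real KMIP client in VM encryption; the values
    # below are placeholders.
    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    client = ProxyKmipClient(
        hostname="kms.example.com", port=5696,      # 5696 is the usual KMIP port
        cert="/etc/kmip/client.pem",                # client cert establishes trust
        key="/etc/kmip/client.key",
        ca="/etc/kmip/ca.pem",
    )

    with client:
        # Ask the KMS to generate a 256-bit AES key; it returns a unique identifier.
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)
        # Later, retrieve the key material by its identifier for distribution.
        key = client.get(key_id)
        print(key_id, type(key))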

Download

Download a full VMware vSphere Virtual Machine Encryption Performance vSphere 6.5 Guide.

Rating: 5/5


Oct 15

Installing and Configuring VMware vRealize Orchestrator v7.1

Installing and Configuring VMware vRealize Orchestrator provides information and instructions about installing, upgrading, and configuring VMware® vRealize Orchestrator.

Intended Audience

This information is intended for advanced vSphere administrators and experienced system administrators
who are familiar with virtual machine technology and datacenter operations.

Introduction to VMware vRealize Orchestrator

VMware vRealize Orchestrator is a development and process-automation platform that provides a library of extensible workflows, allowing you to create and run automated, configurable processes to manage VMware products as well as third-party technologies.
vRealize Orchestrator automates management and operational tasks of both VMware and third-party applications such as service desks, change management systems, and IT asset management systems.
This chapter includes the following topics:

  • “Key Features of the Orchestrator Platform”
  • “Orchestrator User Types and Related Responsibilities”
  • “Orchestrator Architecture”
  • “Orchestrator Plug-Ins”

Key Features of the Orchestrator Platform

Orchestrator is composed of three distinct layers: an orchestration platform that provides the common features required for an orchestration tool, a plug-in architecture to integrate control of subsystems, and a library of workflows. Orchestrator is an open platform that can be extended with new plug-ins and libraries, and can be integrated into larger architectures through a REST API.
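
The REST API mentioned above is one common integration point. As a hedged sketch only (the host, workflow ID, credentials, and exact parameter payload below are placeholders that depend on how your Orchestrator instance is configured), starting a workflow over REST looks roughly like this:

    # Sketch: start an Orchestrator workflow through its REST API.
    import requests

    VRO = "https://vro.example.com:8281/vco/api"
    WORKFLOW_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # hypothetical workflow ID

    body = {"parameters": [
        {"name": "vmName", "type": "string",
         "value": {"string": {"value": "test-vm-01"}}}
    ]}

    resp = requests.post(
        f"{VRO}/workflows/{WORKFLOW_ID}/executions",
        json=body,
        auth=("vro-user", "password"),   # or token-based auth, depending on setup
        verify="/path/to/ca.pem",        # validate the appliance certificate
    )
    resp.raise_for_status()
    print("Execution started:", resp.headers.get("Location"))
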
The following list presents the key Orchestrator features:

    Persistence – Production-grade databases are used to store relevant information, such as processes, workflow states, and configuration information.

    Central management – Orchestrator provides a central way to manage your processes. The application server-based platform, with full version history, can store scripts and process-related primitives in the same storage location, so you avoid unversioned scripts and improper change control on your servers.

    Check-pointing – Every step of a workflow is saved in the database, which prevents data loss if you must restart the server. This feature is especially useful for long-running workflows.

    Control Center – The Control Center interface increases the administrative efficiency of vRealize Orchestrator instances by providing a centralized administrative interface for runtime operations, workflow monitoring, unified log access and configuration, and correlation between workflow runs and system resources. The vRealize Orchestrator logging mechanism is optimized with an additional log file that gathers various performance metrics for vRealize Orchestrator engine throughput.

    Versioning – All Orchestrator Platform objects have an associated version history. Version history is useful for basic change management when distributing processes to project stages or locations.

    Scripting engine – The Mozilla Rhino JavaScript engine provides a way to create building blocks for the Orchestrator Platform. The scripting engine is enhanced with basic version control, variable type checking, namespace management, and exception handling. The engine can be used in the following building blocks:

    • Actions
    • Workflows
    • Policies

    Workflow engine – The workflow engine allows you to automate business processes. It uses the following objects to create a step-by-step process automation in workflows:

    • Workflows and actions that Orchestrator provides
    • Custom building blocks created by the customer
    • Objects that plug-ins add to Orchestrator

    Policy engine – You can use the policy engine to monitor and generate events to react to changing conditions in the Orchestrator server or plugged-in technology. Policies can aggregate events from the platform or any of the plug-ins, which helps you to handle changing conditions on any of the integrated technologies.

    Security – Orchestrator provides the following advanced security functions:

    • Public Key Infrastructure (PKI) to sign and encrypt content imported and exported between servers.
    • Digital Rights Management (DRM) to control how exported content can be viewed, edited, and redistributed.
    • Secure Sockets Layer (SSL) to provide encrypted communications between the desktop client and the server and HTTPS access to the Web front end.
    • Advanced access rights management to provide control over access to processes and the objects manipulated by these processes.

    Encryption – vRealize Orchestrator uses a FIPS-compliant Advanced Encryption Standard (AES) with a 256-bit cipher key for encryption of strings. The cipher key is randomly generated and is unique across appliances that are not part of a cluster. All nodes in a cluster share the same cipher key.

Orchestrator User Types and Related Responsibilities

Orchestrator provides different tools and interfaces based on the specific responsibilities of the global user roles. In Orchestrator, you can have users with full rights, who are part of the administrator group (Administrators), and users with limited rights, who are not part of the administrator group (End Users).

Users with Full Rights

Orchestrator administrators and developers have equal administrative rights, but are divided in terms of responsibilities.

    Administrators – This role has full access to all of the Orchestrator platform capabilities. Basic administrative responsibilities include the following items:

    • Installing and configuring Orchestrator
    • Managing access rights for Orchestrator and applications
    • Importing and exporting packages
    • Running workflows and scheduling tasks
    • Managing version control of imported elements
    • Creating new workflows and plug-ins

    Developers – This user type has full access to all of the Orchestrator platform capabilities.
    Developers are granted access to the Orchestrator client interface and have the following responsibilities:

    • Creating applications to extend the Orchestrator platform functionality
    • Automating processes by customizing existing workflows and creating new workflows and plug-ins.

Users with Limited Rights

    End Users – End users can run and schedule workflows and policies that the
    administrators or developers make available in the Orchestrator client

Orchestrator Architecture

Orchestrator contains a workflow library and a workflow engine to allow you to create and run workflows that automate orchestration processes. You run workflows on the objects of different technologies that Orchestrator accesses through a series of plug-ins.

Orchestrator provides a standard set of plug-ins, including a plug-in for vCenter Server, to allow you to
orchestrate tasks in the different environments that the plug-ins expose.

Orchestrator also presents an open architecture to allow you to plug in external third-party applications to the orchestration platform. You can run workflows on the objects of the plugged-in technologies that you define yourself. Orchestrator connects to an authentication provider to manage user accounts, and to a database to store information from the workflows that it runs. You can access Orchestrator, its workflows, and the objects it exposes through the Orchestrator client interface or through Web services.

vRealize Orchestrator Architecture

Download

Download a full Installing and Configuring VMware vRealize Orchestrator v7.1 guide.

Rating: 5/5


Oct 06

VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance

TECHNICAL WHITE PAPER

Overview

The purpose of this document is to explain how to size bandwidth requirements for Virtual SAN in Stretched Cluster configurations. This document only covers the Virtual SAN network bandwidth requirements.

In Stretched Cluster configurations, two data fault domains have one or more hosts, and the third fault domain contains a witness host or witness appliance. In this document each data fault domain will be referred to as a site.

Virtual SAN Stretched Cluster configurations can be spread across distances, provided bandwidth and latency requirements are met.

Stretched Cluster Configuration

General Guidelines

The bandwidth requirement between the main sites is highly dependent on the workload to be run on Virtual SAN, amount of data, and handling of failure scenarios. Under normal operating conditions, the basic bandwidth requirements are:

Basic bandwidth requirements

Bandwidth Requirements Between Sites

Workloads are seldom all reads or writes, and normally include a general read to write ratio for each use case.

A good example of this would be a VDI workload. During peak utilization, VDI often behaves with a 70/30 write-to-read ratio; that is, 70% of the I/O is due to write operations and 30% is due to read operations. Because each solution has many factors, true ratios should be measured for each workload.

Consider the general situation where a total I/O profile requires 100,000 IOPS, of which 70% are writes and 30% are reads. In a Stretched Cluster configuration, the write I/O is what inter-site bandwidth requirements are sized against.

With Stretched Clusters, read traffic is, by default, serviced by the site that the VM resides on. This concept is called Read Locality.

The required bandwidth between two data sites (B) is equal to Write bandwidth (Wb) * data multiplier (md) * resynchronization multiplier (mr):

B = Wb * md * mr

The data multiplier accounts for the overhead of Virtual SAN metadata traffic and miscellaneous related operations. VMware recommends a data multiplier of 1.4.
The resynchronization multiplier is included to account for resynchronization events. It is recommended to allocate bandwidth capacity on top of the required bandwidth capacity for resynchronization events.

To make room for resynchronization traffic, an additional 25% is recommended, which corresponds to a resynchronization multiplier of 1.25.
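
Putting the guidance together, a small worked example of B = Wb * md * mr (the 4 KB average write size is an assumption for illustration; substitute your measured write bandwidth):

    # Worked example of B = Wb * md * mr using the numbers above.
    # The 4 KB average write size is an assumption for illustration only.
    write_iops = 70_000                 # 70% of a 100,000 IOPS profile
    io_size_bytes = 4096                # assumed average write size
    md, mr = 1.4, 1.25                  # data multiplier, resync multiplier (+25%)

    wb_bps = write_iops * io_size_bytes * 8     # write bandwidth in bits per second
    b_bps = wb_bps * md * mr                    # required inter-site bandwidth

    print(f"Wb = {wb_bps / 1e9:.2f} Gbps")      # ~2.29 Gbps
    print(f"B  = {b_bps / 1e9:.2f} Gbps")       # ~4.01 Gbps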

Bandwidth Requirements Between Witness & Data Sites

Witness bandwidth isn’t calculated in the same way as inter-site bandwidth requirements. Witnesses do not maintain VM data, but rather only component metadata.
It is important to remember that data is stored on Virtual SAN in the form of objects. Objects are comprised of 1 or more components of items such as:

  • VM Home or namespace
  • VM Swap object
  • Virtual Disks
  • Snapshots

Objects can be split into more than one component when the size is greater than 255 GB and/or when a Number of Stripes (stripe width) policy is applied. Additionally, the number of objects and components for a given virtual machine is multiplied when a Number of Failures to Tolerate (FTT) policy is applied for data protection and availability.

The required bandwidth between the witness and each data site is approximately 1138 B × number of components / 5 s.
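
A small worked example of this rule of thumb (the component count is an assumed figure for illustration):

    # Worked example of the witness bandwidth rule of thumb above.
    # The 1,000-component count is an assumed figure for illustration.
    components = 1_000
    bytes_per_component = 1138
    interval_s = 5

    bps = components * bytes_per_component * 8 / interval_s
    print(f"{bps / 1e6:.2f} Mbps")      # ~1.82 Mbps for 1,000 components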

Download

Download a full VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance technical white paper.

Rating: 5/5


Jun 13

Deploying a Centralized VMware vCenter™ Single Sign-On™ Server with a Network Load Balancer

Overview

With the release of VMware vSphere® 5.5 and VMware® vCenter Server™ 5.5, multiple components deliver the vCenter Server management solution. One component, the VMware vCenter™ Single Sign-On™ server, offers an optional deployment configuration that enables the centralization of vCenter Single Sign-On services for multiple local solutions such as vCenter Server. If not architected correctly, however, centralization can increase risk, so a highly available vCenter Single Sign-On design is strongly recommended.

This paper highlights the high-availability options for a centralized vCenter Single Sign-On environment and provides a reference guide for deploying one of the more common centralized vCenter Single Sign-On configurations with an external network load balancer (NLB).

When to Centralize vCenter Single Sign-On Server

VMware highly recommends deploying all vCenter Server components into a single virtual machine, excluding the vCenter Server database. However, large enterprise customers running many vCenter Server instances within a single physical location can simplify vCenter Single Sign-On architecture and management by specifying a dedicated vCenter Single Sign-On environment for all resources in each physical location, reducing the footprint and required resources.

For vSphere 5.5, as a general guideline, VMware recommends centralization of vCenter Single Sign-On server when eight or more vCenter Server instances are present in a given location.

Figure 1 – A Centralized vCenter Single Sign-On Server Environment.

Centralized Single Sign-On High-Availability Options

The absence of vCenter Single Sign-On server greatly impacts the management, accessibility, and operations within a vSphere environment. The type of availability required is based on the user’s recovery time objective (RTO), and VMware solutions can offer various levels of protection.

VMware vSphere Data Protection

VMware vSphere Data Protection™ provides a disk-level backup-and-restore capability utilizing storage-based snapshots. With the release of vSphere Data Protection 5.5, VMware now provides the option of host-level restore. Users can back up vCenter Single Sign-On server virtual machines using vSphere Data Protection and can restore later as necessary to a specified vSphere host.

VMware vSphere High Availability

When deploying a centralized vCenter Single Sign-On server to a vSphere virtual machine environment, users can also deploy VMware vSphere High Availability (vSphere HA) to enable recovery of the vCenter Single Sign-On server virtual machines. vSphere HA monitors virtual machines via heartbeats from the VMware Tools™ package, and it can initiate a reboot of the virtual machine when the heartbeat no longer is being received or when the vSphere host has failed.

VMware vCenter Server Heartbeat

VMware vCenter Server Heartbeat™ provides a richer availability model for the monitoring and redundancy of vCenter Server and its components. It places a centralized vCenter Single Sign-On server into an active–passive architecture, monitors the application, and provides an up-to-date passive node for recovery during a vSphere host, virtual machine, or application failure.

Network Load Balancer

A VMware or third-party NLB can be configured to allow SSL pass-through communications to a number of local vCenter Single Sign-On server instances and provide a distributed and redundant vCenter Single Sign-On solution. Although VMware provides NLB capability in some of its optional products, such as VMware vCloud® Networking and Security™, there also are third-party solutions available in the marketplace. VMware does not provide support for third-party NLB solutions.

Deploying vCenter Single Sign-On Server with a Network Load Balancer

Preinstallation Checklist

The guidance provided within this document will reference the following details:

Table 1 – Centralized vCenter Single Sign-On Requirements

Figure 2 – Example of a vCenter Single Sign-On Server with a Network Load Balancer

Download

Download a full Deploying a Centralized VMware vCenter™ Single Sign-On™ Server with a Network Load Balancer – Technical Reference

Rating: 5/5


Jun 13

NSX Distributed Firewalling Policy Rules Configuration Guide

Created by nikhilvmw on Sep 23, 2014 5:16 PM. Last modified by nikhilvmw on Nov 6, 2014 2:19 PM.
VMware NSX for vSphere, release 6.0.x.

This document covers how to create security policy rules in VMware NSX, including the different options for configuring security rules either through the Distributed Firewall or via the Service Composer user interface, and the unique options NSX offers to create dynamic policies based on infrastructure context.

Thanks to Francis Guillier, Kausum Kumar and Srini Nimmagadda for helping author this document.
Regards,
NSX Team

Introduction

VMware NSX Distributed Firewall (DFW) provides the capability to enforce firewalling functionality directly at the virtual machine’s (VM) vNIC layer. It is a core component of the micro-segmentation security model, where east-west traffic can now be inspected at near line rate, preventing lateral-movement attacks.

This technical brief gives details about DFW policy rule configuration with NSX. Both DFW security policy objects and DFW consumption model will be discussed in this document.

We assume the reader already has some knowledge of DFW and Service Composer functions. Please refer to the appropriate collateral if you need more information on these NSX components.

Distributed Firewall Object Grouping Model

NSX provides the capability to micro-segment your SDDC to achieve an effective security posture. To implement micro-segmentation in your SDDC, NSX provides various ways of grouping VMs and applying security policies to them. This document specifies in detail the different ways grouping can be done and when you should use one over the other.
Security policy rules can be written in various ways, as shown below:

Network Based Policies:

    This is the traditional approach of grouping based on L2 or L3 elements. Grouping can be based on MAC addresses, IP addresses, or a combination of both. NSX supports this approach of grouping objects. The security team needs to be aware of the networking infrastructure to deploy network-based policies. There is a high probability of security rule sprawl, because grouping based on dynamic attributes is not used. This method of grouping works well if you are migrating existing rules from a different vendor’s firewall.

When not to use this: In dynamic environments, such as self-service IT or cloud-automated deployments, where VMs and application topologies are added and deleted at a rapid rate, a MAC address-based grouping approach may not be suitable, because there will be a delay between provisioning a VM and adding its MAC address to the group. If you have an environment with high mobility, such as vMotion and HA, L3/IP-based grouping approaches may not be adequate either.

Infrastructure Based Policies:

    In this approach, grouping is based on SDDC infrastructure such as vCenter clusters, logical switches, distributed port groups, and so on. For example, clusters 1 through 4 are earmarked for PCI-type applications; in such a case, grouping can be done based on cluster names and rules can be enforced on these groups. Another example would be if you know which logical switches in your environment are connected to which applications, e.g., the App Tier logical switch contains all VMs pertaining to application ‘X’. The security team needs to work closely with the vCenter administration team to understand logical and physical boundaries.

    When not to use this: If there are no physical or logical boundaries in your SDDC environment, then this type of approach is not suitable. Also, you need to be careful about where you can deploy your applications. For example, if you would like to deploy a PCI workload to any cluster that has adequate compute resources available, the security posture cannot be tied to a cluster but should move with the application.

Application Based Policies:

    In this approach, grouping is based on the application type (e.g., VMs tagged as “Web_Servers”), application environment (e.g., all resources tagged as “Production_Zone”), and application security posture. The advantage of this approach is that the security posture of the application is not tied to either network constructs or SDDC infrastructure. Security policies can move with the application irrespective of network or infrastructure boundaries. Policies can be templated and reused across instances of the same types of applications and workloads. You can use a variety of mechanisms to group workloads. The security team needs to be aware only of the application it is trying to secure with the policies. The security policies follow the application life cycle; they come alive when the application is deployed and are destroyed when the application is decommissioned.

    When not to use this: If the environment is fairly static, without mobility, and infrastructure functions are properly demarcated, you do not need to use application-based policies.

    An application-based policy approach will greatly aid in moving towards a self-service IT model. The security team needs to be aware only of how to secure an application, without knowing the underlying topology. Concise and reusable security rules require application awareness, so a proper security posture can be developed via application-based policies.

NSX Security-Groups

A Security Group is a container construct that allows you to group vCenter objects into a common entity.
When defining a Security Group, multiple inclusions and exclusions can be used, as shown in the diagram below:

NSX Security Groups
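
To make the grouping models above concrete, the following is a conceptual sketch only (plain Python, not the NSX API or Service Composer): a Security Group whose membership is evaluated dynamically from VM tags, with static inclusions and exclusions layered on top, so the policy follows the application.

    # Conceptual sketch of dynamic Security Group membership evaluated from
    # VM attributes such as security tags, plus static inclusions/exclusions.
    from dataclasses import dataclass, field

    @dataclass
    class VM:
        name: str
        tags: set = field(default_factory=set)

    @dataclass
    class SecurityGroup:
        name: str
        required_tag: str                            # dynamic membership criterion
        include: set = field(default_factory=set)    # static inclusions by VM name
        exclude: set = field(default_factory=set)    # static exclusions by VM name

        def members(self, inventory):
            dynamic = {vm.name for vm in inventory if self.required_tag in vm.tags}
            return (dynamic | self.include) - self.exclude

    inventory = [
        VM("web-01", {"Web_Servers", "Production_Zone"}),
        VM("web-02", {"Web_Servers"}),
        VM("db-01",  {"DB_Servers", "Production_Zone"}),
    ]

    sg = SecurityGroup("SG-Web-Prod", required_tag="Web_Servers", exclude={"web-02"})
    print(sg.members(inventory))        # {'web-01'}: membership follows the tags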

Download

Download a full VMware NSX DFW Policy Rules Configuration Technical White Paper

Rating: 5/5


Jun 10

VMware vCenter Server™ 6.0 Deployment Guide

Introduction

The VMware vCenter Server™ 6.0 release introduces new, simplified deployment models. The components that make up a vCenter Server installation have been grouped into two types: embedded and external. Embedded refers to a deployment in which all components—this can but does not necessarily include the database—are installed on the same virtual machine. External refers to a deployment in which vCenter Server is installed on one virtual machine and the Platform Services Controller (PSC) is installed on another. The Platform Services Controller is new to vCenter Server 6.0 and comprises VMware vCenter™ Single Sign-On™, licensing, and the VMware Certificate Authority (VMCA).

Embedded installations are recommended for standalone environments in which there is only one vCenter Server system and replication to another Platform Services Controller is not required. If there is a need to replicate with other Platform Services Controllers or there is more than one vCenter Single Sign-On enabled solution, deploying the Platform Services Controller(s) on separate virtual machine(s)—via external deployment—from vCenter Server is required.

This paper defines the services installed as part of each deployment model, recommended deployment models (reference architectures), installation and upgrade instructions for each reference architecture, postdeployment steps, and certificate management in VMware vSphere 6.0.

VMware vCenter Server 6.0 Services

Figure 1 – vCenter Server and Platform Services Controller Services

Requirements

General
A few requirements are common to both installing vCenter Server on Microsoft Windows and deploying the VMware vCenter Server Appliance™. Ensure that all of these prerequisites are in place before proceeding with a new installation or an upgrade; a short preflight sketch follows the checklist below.

  • DNS – Ensure that resolution is working for all system names via fully qualified domain name (FQDN), short name (host name), and IP address (reverse lookup).
  • Time – Ensure that time is synchronized across the environment.
  • Passwords – vCenter Single Sign-On passwords must contain only ASCII characters; non-ASCII and extended (or high) ASCII characters are not supported.
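
As referenced above, a quick, hypothetical preflight sketch for the DNS prerequisite (the hostnames are placeholders; time synchronization would be verified separately, for example against your NTP source):

    # Hypothetical preflight check: forward and reverse DNS resolution for each
    # system name. Hostnames below are placeholders.
    import socket

    hosts = ["vcenter01.example.com", "psc01.example.com"]

    for fqdn in hosts:
        try:
            ip = socket.gethostbyname(fqdn)              # forward lookup
            reverse, _, _ = socket.gethostbyaddr(ip)     # reverse lookup
            status = "OK" if reverse.lower() == fqdn.lower() else "CHECK REVERSE RECORD"
            print(f"{fqdn} -> {ip} -> {reverse}: {status}")
        except (socket.gaierror, socket.herror) as err:
            print(f"{fqdn}: resolution failed ({err})")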

Windows Installation

Installing vCenter Server 6.0 on a Windows Server requires a Windows 2008 SP2 or higher 64-bit operating system (OS). Two options are presented: Use the local system account or use a Windows domain account. With a Windows domain account, ensure that it is a member of the local computer’s administrator group and that it has been delegated the “Log on as a service” right and the “Act as part of the operating system” right. This option is not available when installing an external Platform Services Controller.

Windows installations can use either a supported external database or a local PostgreSQL database that is installed with vCenter Server and is limited to 20 hosts and 200 virtual machines. Supported external databases include Microsoft SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, Oracle Database 11g, and Oracle Database 12c. When upgrading to vCenter Server 6.0, if SQL Server Express was used in the previous installation, it will be replaced with PostgreSQL. External databases require a 64-bit DSN. DSN aliases are not supported.

When upgrading to vCenter Server 6.0, only versions 5.0 and later are supported. If the vCenter Server system being upgraded is not version 5.0 or later, an upgrade to version 5.0 or later is required first.

Table 2 outlines minimum hardware requirements per deployment environment type and size when using an external database. If VMware vSphere Update Manager™ is installed on the same server, add 125GB of disk space and 4GB of RAM.

Table 2. Minimum Hardware Requirements – Windows Installation

Download

Download a full VMware vCenter Server™ 6.0 Deployment Guide

Rating: 5/5


May 17

Virtual SAN Compatibility Guide

VMware Virtual SAN Ready Nodes

The purpose of this document is to provide VMware Virtual SAN™ Ready Node configurations from OEM vendors. A Virtual SAN Ready Node is a validated server configuration in a tested, certified hardware form factor for Virtual SAN deployment, jointly recommended by the server OEM and VMware. Virtual SAN Ready Nodes are ideal as hyper-converged building blocks for larger data center environments looking for automation and the ability to customize hardware and software configurations.

A Virtual SAN Ready Node is a turnkey solution for accelerating Virtual SAN deployment, with the following benefits:

1. Complete server configurations jointly recommended by the server OEM and VMware

  • Complete with the size, type, and quantity of CPU, memory, network, I/O controller, HDD, and SSD, combined with a certified server that is best suited to run a specific Virtual SAN workload
  • Most Ready Nodes come pre-loaded with vSphere and Virtual SAN if the user decides to quote/order as-is

2. Easy to order and faster time to market

  • Single orderable “SKU” per Ready Node
  • Can be quoted/ordered as-is or customized

3. Benefit of choices

  • Work with your server OEM of choice
  • Choose the Ready Node profiles based on your workload
  • Available for new license sales or for customers with an ELA

The Virtual SAN Ready Nodes listed in this document are classified into HY-2 Series, HY-4 Series, HY-6 Series, HY-8 Series, AF-6 Series and AF-8 Series. The solution profiles are defined based on different levels of workload requirements for performance and capacity and each solution profile provides a different price/performance focus.
For guidelines on the hardware choices of a Virtual SAN solution, along with the infrastructure sizing assumptions and design considerations made to create the sample configurations, please refer to the Virtual SAN Hardware Quick Reference Guide.

In order to choose the right Virtual SAN Ready Node for your environment, follow this two-step process:

1. Refer to the Virtual SAN Hardware Quick Reference Guide for guidance on how to identify the right solution profile category for your workload profile and the category of Ready Node that meets your needs
2. Choose Ready Nodes from your vendor of choice listed in this document that correspond to the solution profile category you identified for your workload

Note: The Virtual Machine profiles including number of Virtual Machines per desktop are based on Virtual SAN 5.5. The Virtual SAN 6.0 numbers will be available after GA.

Download

Download the full Virtual SAN Compatibility Guide Ready Nodes technical white paper.

Confirm your choice in the VMware Virtual SAN Hardware Compatibility Guide.

Rating: 5/5


May 16

VMware Horizon 6 Storage Considerations

Overview

This document addresses the challenges associated with end-user computing workloads in a virtualized environment and suggests design considerations for managing them. It focuses on performance, capacity, and operational considerations of the storage subsystem because storage is the foundation of any virtual desktop infrastructure (VDI) implementation. Where possible, it offers multiple solutions to common design choices faced by IT architects tasked with designing and implementing a VMware Horizon storage strategy.

Typical Storage Considerations

Over the years, many end-user computing environments were designed, engineered, and built without proper consideration for specific storage requirements. Some were built on existing shared storage platform offerings.
Others simply had their storage capacity increased without an examination of throughput and performance.
These oversights prevented some VDI projects from delivering on the promises of virtualization.
For success in design, operation, and scale, IT must be at least as diligent in the initial discovery and design phases as in deployment and testing. It is essential to have a strong methodology and a plan to adapt or refine certain elements when technology changes. This document aims to provide guidance on the nuances of storage.
Operating systems are designed without consideration for virtualization technologies or their storage subsystems. This applies to all versions of the Windows operating system, both desktop and server, which are designed to interact with a locally connected magnetic disk resource.
The operating system expects at least one local hard disk to be dedicated to each single instance, giving the OS complete control from the device driver upward with respect to the reading, writing, caching, arrangement, and optimization of the file system components on the disk. When installing the operating system into a virtual machine running on a hypervisor, particularly when running several virtual machines simultaneously on that hypervisor, the IT architect needs to be aware of factors that affect how the operating system works.

VMware Horizon Architecture

Figure 1 presents a logical overview of a validated VMware Horizon® 6 design. The design includes VMware Horizon with View, VMware Workspace™ Portal, and VMware Mirage™, along with the recommended supporting infrastructure. These components work in concert to aggregate identity, access, virtual desktops, applications, and image management in a complete architecture.

Figure 1. VMware Horizon Architecture

Capacity and Sizing Considerations

The primary storage considerations in an end-user computing infrastructure have two dimensions: performance and capacity, which are the focus of this paper.

Importance of IOPS

Input/output operations per second (IOPS) is the performance measurement used to benchmark computer storage devices such as hard disk drives (HDD), solid-state drives (SSD), and storage area networks (SAN). Each disk type discussed in this document has a different IOPS performance profile and should be evaluated independently.
When you consolidate multiple virtual machines and other user workloads on a hypervisor, you should understand the typical storage performance expected by a single operating system. This requires an understanding of the added contention for access to the storage subsystem that accompanies every subsequent guest operating system that you host on that hypervisor. Although IOPS cannot account for all performance requirements of a storage system, this measure is widely considered the single most important statistic. All the virtual assessment tools offered by VMware partners capture granular IOPS data, giving any IT architect the ability to optimize the storage accurately for end-user-computing workloads.
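
As a rough illustration of how per-desktop IOPS and the read/write mix translate into back-end load when consolidating (every figure below is an assumption for illustration, not a measurement from this paper):

    # Rough sizing illustration; per-desktop IOPS, read/write mix, and RAID
    # write penalty below are assumptions, not figures from this paper.
    desktops = 500
    iops_per_desktop = 10            # assumed steady-state figure from an assessment
    write_ratio = 0.7                # e.g., a VDI-like 70/30 write-to-read mix
    raid_write_penalty = 2           # e.g., RAID 10: each write costs 2 back-end I/Os

    frontend_iops = desktops * iops_per_desktop
    backend_iops = (frontend_iops * (1 - write_ratio)
                    + frontend_iops * write_ratio * raid_write_penalty)

    print(frontend_iops, backend_iops)   # 5000 front-end -> 8500 back-end IOPS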

The Impact of Latency

Latency can significantly affect performance and, in some cases, might have a greater impact than IOPS. Even if your storage can deliver a million IOPS, that does not guarantee your end users an enjoyable virtual desktop or workspace experience.
When assessing latency, always look up and down the storage stack to get a clear understanding of where latency can build up. It is always good to start at the top layer of the storage stack, where the application is running in the guest operating system, to find the total amount of latency that the application is seeing. Virtual disk latency is one of the key metrics that influences good or bad user experience.

Figure 2. Storage Stack Overview

Download

Download the full VMware Horizon 6 Storage Considerations technical white paper.

Rating: 5/5


May 15

vsanSparse – TechNote for Virtual SAN 6.0 Snapshots

Introduction

Virtual SAN 6.0 introduces a new on-disk format that includes VirstoFS technology. This always-sparse filesystem provides the basis for a new snapshot format, also introduced with Virtual SAN 6.0, called vsanSparse. Through the use of the underlying sparseness of the filesystem and a new, in-memory metadata cache for lookups, vsanSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.

Introducing vsanSparse snapshots

As mentioned in the introduction, Virtual SAN 6.0 has a new on-disk (v2) format that facilitates the introduction of a new type of performance-based snapshot. The new vsanSparse format leverages the underlying sparseness of the new VirstoFS filesystem (v2) on-disk format and a new in-memory caching mechanism for tracking updates. This v2 format is an always-sparse file system (512-byte block size instead of 1MB block size on VMFS-L) and is only available with Virtual SAN 6.0.

Figure 1. vsanSparse disk format

When a virtual machine snapshot is created on Virtual SAN 5.5, a vmfsSparse/redo log object is created (you can find out more about this format in appendix A of this paper). In Virtual SAN 6.0, when a virtual machine snapshot is created, vsanSparse “delta” objects get created.

Why is vsanSparse needed?

The new vsanSparse snapshot format provides Virtual SAN administrators with enterprise-class snapshots and clones. The goal is to improve snapshot performance by continuing to use the existing redo-log mechanism while utilizing an in-memory metadata cache and a more efficient sparse filesystem layout.

How does vsanSparse work?

When a vsanSparse snapshot is taken of a base disk, a child delta disk is created. The parent is now considered a point-in-time (PIT) copy, and the running point of the virtual machine is now the delta. New writes by the virtual machine go to the delta, while the base disk and other snapshots in the chain satisfy reads. To get the current state of the disk, one takes the “parent” disk and redoes all writes from the “children” chain; thus, children are referred to as “redo logs”. In this way, the vsanSparse format is very similar to the earlier vmfsSparse format.
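
A minimal conceptual sketch of the redo-log chain described above (not the vsanSparse on-disk format): new writes land in the running delta, and reads fall back through the chain until a disk in the chain holds the block.

    # Conceptual model of a snapshot redo-log chain: writes go to the running
    # delta; reads walk the chain from the newest delta back to the base disk.
    class Disk:
        def __init__(self, parent=None):
            self.blocks = {}            # sparse: only written blocks are stored
            self.parent = parent

        def write(self, lba, data):
            self.blocks[lba] = data     # new writes always land in this delta

        def read(self, lba):
            if lba in self.blocks:
                return self.blocks[lba]
            return self.parent.read(lba) if self.parent else b"\x00"  # unwritten

    base = Disk()
    base.write(0, b"original")
    snap1 = Disk(parent=base)           # snapshot: base becomes a point-in-time copy
    snap1.write(0, b"changed")          # overwrite lands in the delta only

    print(base.read(0), snap1.read(0))  # b'original' b'changed'
    print(snap1.read(7))                # falls through the chain to the base disk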

Download

Download the full vsanSparse – TechNote for Virtual SAN 6.0 Snapshots.

Rating: 5/5


May 15

What’s New in VMware vSphere™ 5.0 Networking

Introduction

With the release of VMware vSphere™ 5.0 (“vSphere”), VMware brings a number of powerful new features and enhancements to the networking capabilities of the vSphere platform. These new network capabilities enable customers to run business-critical applications with confidence and provide the flexibility to enable customers to respond to business needs more rapidly. All the networking capabilities discussed in this document are available only with the VMware vSphere Distributed Switch (Distributed Switch).

There are two broad types of networking capabilities that are new or enhanced in the VMware vSphere 5.0
release. The first type improves the network administrator’s ability to monitor and troubleshoot virtual
infrastructure traffic by introducing features such as:

  • NetFlow
  • Port mirror

The second type focuses on enhancements to the network I/O control (NIOC) capability first released in vSphere 4.1. These NIOC enhancements target the management of I/O resources in consolidated I/O environments with 10 Gigabit Ethernet network interface cards. The enhancements to NIOC enable customers to provide end-to-end quality of service (QoS) by allocating I/O shares for user-defined traffic types as well as tagging packets for prioritization by the external network infrastructure (a simple illustration of share-based allocation follows the list below). The following are the key NIOC
enhancements:

  • User-defined resource pool
  • vSphere replication traffic type
  • IEEE 802.1p tagging
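
To illustrate how share-based allocation behaves under contention (the share values below are assumptions for illustration, not recommendations from this paper):

    # Illustration of proportional, share-based bandwidth allocation under
    # contention on a 10GbE uplink; share values are assumptions only.
    link_gbps = 10
    shares = {"vm": 100, "vmotion": 50, "nfs": 50, "replication": 50}   # active types

    total = sum(shares.values())
    for traffic_type, s in shares.items():
        print(f"{traffic_type}: {link_gbps * s / total:.1f} Gbps")
    # vm: 4.0 Gbps; vmotion/nfs/replication: 2.0 Gbps each (applies only
    # when the link is contended)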

The following sections will provide higher-level details on new and enhanced networking capabilities in vSphere 5.0.

Network Monitoring and Troubleshooting

In a vSphere 5.0 environment, virtual network switches provide connectivity for virtual machines running on VMware® ESXi™ hosts to communicate with each other as well as connectivity to the external physical
infrastructure. Network administrators want more visibility into this traffic that is flowing in the virtual infrastructure. This visibility will help them monitor and troubleshoot network issues. VMware vSphere 5.0 introduces two new features in the Distributed Switch that provide the required monitoring and troubleshooting capability to the virtual infrastructure.

NetFlow

NetFlow is a networking protocol that collects IP traffic information as records and sends them to a collector such as CA NetQoS for traffic flow analysis. VMware vSphere 5.0 supports NetFlow v5, which is the most common version supported by network devices. NetFlow capability in the vSphere 5.0 platform provides visibility into virtual infrastructure traffic that includes:

  • Intrahost virtual machine traffic (virtual machine–to–virtual machine traffic on the same host)
  • Interhost virtual machine traffic (virtual machine–to–virtual machine traffic on different hosts)
  • Virtual machine–physical infrastructure traffic

Figure 1 shows a Distributed Switch configured to send NetFlow records to a collector that is connected to an external network switch. The blue dotted line with arrow indicates the NetFlow session that is established to send flow records for the collector to analyze.

Figure 1. NetFlow Traffic

Usage

NetFlow capability on a Distributed Switch, along with a NetFlow collector tool, helps monitor application flows and measure flow performance over time. It also helps in capacity planning and in ensuring that I/O resources are utilized properly by different applications, based on their needs.

IT administrators who want to monitor the performance of application flows running in the virtualized
environment can enable flow monitoring on a Distributed Switch.

Configuration

NetFlow on Distributed Switches can be enabled at the port group level, at an individual port level or at the uplink level. When configuring NetFlow at the port level, administrators should select the NetFlow override tab, which will make sure that flows are monitored even if the port group–level NetFlow is disabled.

Port Mirror

Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected to another switch port. Port mirroring is also referred to as Switch Port Analyzer (SPAN) on Cisco switches. In VMware vSphere 5.0, a Distributed Switch provides a similar port mirroring capability to that available on a physical network switch. After a port mirror session is configured with a destination—a virtual machine, a vmknic or an uplink port—the Distributed Switch copies packets to the destination. Port mirroring provides visibility into:

  • Intrahost virtual machine traffic (virtual machine–to–virtual machine traffic on the same host)
  • Interhost virtual machine traffic (virtual machine–to–virtual machine traffic on different hosts)

Figure 2 shows different types of traffic flows that can be monitored when a virtual machine on a host acts as a destination or monitoring device. All traffic shown by the orange dotted line with arrow is mirrored traffic that is sent to the destination virtual machine.

Figure 2. Port Mirror Traffic Flows When the Destination (Where Packets Are Mirrored) Is a Virtual Machine

Usage

The port mirroring capability on a Distributed Switch is a valuable tool that helps network administrators debug network issues in a virtual infrastructure. The granular control over monitoring ingress, egress, or all traffic of a port helps administrators fine-tune what traffic is sent for analysis.

Configuration

Port mirror configuration can be done at the Distributed Switch level, where a network administrator can create a port mirror session by identifying the traffic source that needs monitoring and the traffic destination where the traffic will be mirrored. The traffic source can be any port with ingress, egress or all traffic selected. The traffic destination can be any virtual machine, vmknic or uplink port.

Download

Download a full What’s New in VMware vSphere™ 5.0 Networking Technical White Paper.

Rating: 5/5