Oct 06

VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance

TECHNICAL WHITE PAPER

Overview

The purpose of this document is to explain how to size bandwidth requirements for Virtual SAN in Stretched Cluster configurations. This document only covers the Virtual SAN network bandwidth requirements.

In Stretched Cluster configurations, two data fault domains have one or more hosts, and the third fault domain contains a witness host or witness appliance. In this document each data fault domain will be referred to as a site.

Virtual SAN Stretched Cluster configurations can be spread across distances, provided bandwidth and latency requirements are met.

Stretched Cluster Configuration

General Guidelines

The bandwidth requirement between the main sites is highly dependent on the workload to be run on Virtual SAN, amount of data, and handling of failure scenarios. Under normal operating conditions, the basic bandwidth requirements are:

Basic bandwidth requirements

Bandwidth Requirements Between Sites

Workloads are seldom all reads or all writes; each use case normally has a characteristic read-to-write ratio.

A good example is a VDI workload. During peak utilization, VDI often exhibits a 70/30 write-to-read ratio; that is, 70% of the I/O is due to write operations and 30% is due to reads. Because every solution has many factors, the true ratio should be measured for each workload.

Consider the general case where the total I/O profile requires 100,000 IOPS, of which 70% are writes and 30% are reads. In a Stretched Cluster configuration, the write I/O is what inter-site bandwidth is sized against.

With Stretched Clusters, read traffic is, by default, serviced by the site that the VM resides on. This concept is called Read Locality.

The required bandwidth between two data sites (B) is equal to Write bandwidth (Wb) * data multiplier (md) * resynchronization multiplier (mr):

B = Wb * md * mr

The data multiplier comprises overhead for Virtual SAN metadata traffic and miscellaneous related operations. VMware recommends a data multiplier of 1.4.
The resynchronization multiplier is included to account for resynchronizing events. It is recommended to allocate bandwidth capacity on top of required bandwidth capacity for resynchronization events.

To make room for resynchronization traffic, an additional 25% of bandwidth is recommended, i.e. a resynchronization multiplier of 1.25.
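
As a worked illustration of the formula above, the following sketch plugs in the 100,000 IOPS, 70% write example from earlier, together with an assumed average write size of 4 KB (an assumption for this example, not a figure from this paper):

    # Minimal sketch of the inter-site bandwidth formula B = Wb * md * mr.
    # The 4 KB write size is an assumed value for illustration only.
    def intersite_bandwidth_bps(write_iops, write_size_bytes,
                                data_multiplier=1.4, resync_multiplier=1.25):
        """Required inter-site bandwidth in bits per second."""
        write_bandwidth_bps = write_iops * write_size_bytes * 8   # Wb in bits/s
        return write_bandwidth_bps * data_multiplier * resync_multiplier

    total_iops = 100_000
    write_ratio = 0.70                      # 70/30 write-to-read ratio
    write_iops = total_iops * write_ratio   # only writes size the inter-site link
    bandwidth = intersite_bandwidth_bps(write_iops, write_size_bytes=4096)
    print(f"Required inter-site bandwidth: {bandwidth / 1e9:.2f} Gbps")

With these inputs the result is roughly 4 Gbps; measured write bandwidth for the actual workload should be substituted for the assumed values.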

Bandwidth Requirements Between Witness & Data Sites

Witness bandwidth isn’t calculated in the same way as inter-site bandwidth requirements. Witnesses do not maintain VM data, but rather only component metadata.
It is important to remember that data is stored on Virtual SAN in the form of objects. Objects are made up of one or more components, covering items such as:

  • VM Home or namespace
  • VM Swap object
  • Virtual Disks
  • Snapshots

Objects can be split into more than one component when the size is greater than 255 GB and/or a Number of Stripes (stripe width) policy is applied. Additionally, the number of objects/components for a given virtual machine is multiplied when a Number of Failures to Tolerate (FTT) policy is applied for data protection and availability.

The required bandwidth between the Witness and each site is equal to ~1138 B x Number of Components / 5s.
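
The following sketch applies this rule of thumb. The component estimate is a simplification assumed for the example (it splits an object into 255 GB chunks and multiplies by the number of mirror copies, ignoring witness components and stripe-width policies), and the 1,000-component cluster is hypothetical:

    import math

    # Rule of thumb from above: bandwidth ~= 1138 B * number of components / 5 s.
    def witness_bandwidth_bps(total_components):
        return 1138 * 8 * total_components / 5      # bytes -> bits, per 5 s window

    # Simplified, assumed estimate of components for a single mirrored object.
    def estimate_components(object_size_gb, ftt=1):
        return math.ceil(object_size_gb / 255) * (ftt + 1)

    print(estimate_components(600, ftt=1))                   # 6 components
    print(f"{witness_bandwidth_bps(1_000) / 1e6:.2f} Mbps")  # ~1.82 Mbps for 1,000 components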

Download

Download the full VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance technical white paper.

Rating: 5/5


Jun 13

NSX Distributed Firewalling Policy Rules Configuration Guide

Created by nikhilvmw on Sep 23, 2014 5:16 PM. Last modified by nikhilvmw on Nov 6, 2014 2:19 PM.
VMware NSX for vSphere, release 6.0.x.

This document covers how to create security policy rules in VMware NSX, including the different options for configuring security rules either through the Distributed Firewall or via the Service Composer user interface. It covers all the unique options NSX offers to create dynamic policies based on infrastructure context.

Thanks to Francis Guillier, Kausum Kumar and Srini Nimmagadda for helping author this document.
Regards,
NSX Team

Introduction

VMware NSX Distributed Firewall (DFW) provides the capability to enforce firewalling functionality directly at the Virtual Machine (VM) vNIC layer. It is a core component of the micro-segmentation security model, where east-west traffic can now be inspected at near line-rate processing, helping prevent lateral-movement attacks.

This technical brief gives details about DFW policy rule configuration with NSX. Both DFW security policy objects and DFW consumption model will be discussed in this document.

We assume the reader already has some knowledge of DFW and Service Composer functions. Please refer to the appropriate collateral if you need more information on these NSX components.

Distributed Firewall Object Grouping Model

NSX provides the capability to micro-segment your SDDC to provide an effective security posture. To implement micro-segmentation in your SDDC, NSX provides you various ways of grouping VMs and applying security policies to them. This document specifies in detail different ways groupings can be done and details on when you should use one over the other.
Security policy rules can be written in various ways as shown below:

Network Based Policies:

    This is the traditional approach of grouping based on L2 or L3 elements. Grouping can be based on MAC addresses, IP addresses, or a combination of both. NSX supports this approach of grouping objects. The security team needs to be aware of the networking infrastructure to deploy network-based policies. There is a high probability of security rule sprawl, as grouping based on dynamic attributes is not used. This method of grouping works well if you are migrating existing rules from a different vendor's firewall.

Network Based Policies

When not to use this: In dynamic environments, e.g. Self-Service IT or cloud-automated deployments, where VMs and application topologies are added and deleted at a rapid rate, a MAC address-based grouping approach may not be suitable because there will be a delay between provisioning a VM and adding its MAC address to the group. If you have an environment with high mobility, such as vMotion and HA, L3/IP-based grouping approaches may not be adequate either.

Infrastructure Based Policies:

    In this approach, grouping is based on SDDC infrastructure such as vCenter clusters, logical switches, distributed port groups, etc. An example would be clusters 1 through 4 being earmarked for PCI-type applications. In such a case, grouping can be done based on cluster names and rules can be enforced on these groups. Another example would be knowing which logical switches in your environment are connected to which applications, e.g. the App Tier logical switch contains all VMs pertaining to application 'X'. The security team needs to work closely with the vCenter administration team to understand logical and physical boundaries.

    When not to use this: If there are no physical or logical boundaries in your SDDC environment, then this type of approach is not suitable. You also need to be very careful about where you can deploy your applications. For example, if you would like to deploy a PCI workload to any cluster that has adequate compute resources available, the security posture cannot be tied to a cluster but should move with the application.

Application Based Policies:

    In this approach, grouping is based on the application type (e.g. VMs tagged as "Web_Servers"), the application environment (e.g. all resources tagged as "Production_Zone"), and the application security posture. The advantage of this approach is that the security posture of the application is not tied to either network constructs or SDDC infrastructure. Security policies can move with the application irrespective of network or infrastructure boundaries. Policies can be templated and reused across instances of the same types of applications and workloads, and you can use a variety of mechanisms to group. The security team needs to be aware only of the application it is trying to secure. The security policies follow the application life cycle: they come alive when the application is deployed and are destroyed when the application is decommissioned.

    When not to use this: If the environment is fairly static, without mobility, and infrastructure functions are properly demarcated, you do not need to use application-based policies.

    An application-based policy approach will greatly aid the move toward a Self-Service IT model. The security team needs to be aware only of how to secure an application, without knowing the underlying topology. Concise and reusable security rules require application awareness; thus a proper security posture can be developed via application-based policies.

NSX Security-Groups

Security Groups are a container construct that allows vCenter objects to be grouped into a common entity.
When defining a Security Group, multiple inclusions and exclusions can be used, as shown in the diagram below:

NSX Security Groups
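
As a purely illustrative model (not the NSX API), the sketch below shows how a Security Group's effective membership could be resolved from static inclusions, a dynamic criterion such as a security tag, and static exclusions; treating exclusion as overriding inclusion is an assumption made for this example:

    from dataclasses import dataclass, field

    @dataclass
    class VM:
        name: str
        tags: set = field(default_factory=set)

    @dataclass
    class SecurityGroup:
        static_include: set = field(default_factory=set)   # VM names to always include
        static_exclude: set = field(default_factory=set)   # VM names to always exclude
        required_tag: str = ""                              # dynamic criterion (security tag)

        def members(self, inventory):
            dynamic = {vm.name for vm in inventory
                       if self.required_tag and self.required_tag in vm.tags}
            return (dynamic | self.static_include) - self.static_exclude

    inventory = [VM("web-01", {"Web_Servers"}), VM("db-01", {"DB_Servers"})]
    sg = SecurityGroup(static_include={"db-01"}, required_tag="Web_Servers")
    print(sg.members(inventory))   # {'web-01', 'db-01'}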

Download

Download the full VMware NSX DFW Policy Rules Configuration technical white paper.

Rating: 5/5


May 17

Virtual SAN Compatibility Guide

VMware Virtual SAN Ready Nodes

The purpose of this document is to provide VMware Virtual SAN™ Ready Node configurations from OEM vendors. A Virtual SAN Ready Node is a validated server configuration in a tested, certified hardware form factor for Virtual SAN deployment, jointly recommended by the server OEM and VMware. Virtual SAN Ready Nodes are ideal as hyper-converged building blocks for larger data center environments that want automation and the ability to customize hardware and software configurations.

Virtual SAN Ready Node is a turnkey solution for accelerating Virtual SAN deployment, with the following benefits:

1. Complete server configurations jointly recommended by the server OEM and VMware

  • Complete with the size, type and quantity of CPU, Memory, Network, I/O Controller, HDD and SSD combined with a certified server that is best suited to run a specific Virtual SAN workload
  • Most Ready Nodes come pre-loaded with vSphere and Virtual SAN if the user decides to quote/order as-is

2. Easy to order and faster time to market

  • Single orderable “SKU” per Ready Node
  • Can be quoted/ordered as-is or customized

3. Benefit of choices

  • Work with your server OEM of choice
  • Choose the Ready Node profiles based on your workload
  • New license sales or for customers with ELA

The Virtual SAN Ready Nodes listed in this document are classified into HY-2 Series, HY-4 Series, HY-6 Series, HY-8 Series, AF-6 Series and AF-8 Series. The solution profiles are defined based on different levels of workload requirements for performance and capacity and each solution profile provides a different price/performance focus.
For guidelines on the hardware choices of a Virtual SAN solution, along with the infrastructure sizing assumptions and design considerations made to create the sample configurations, please refer to the Virtual SAN Hardware Quick Reference Guide.

In order to choose the right Virtual SAN Ready Node for your environment, follow this two-step process:

1. Refer to the Virtual SAN Hardware Quick Reference Guide for guidance on how to identify the right solution profile category for your workload profile and the category of Ready Node that meets your needs
2. Choose Ready Nodes from your vendor of choice listed in this document that correspond to the solution profile category that you identified for your workload

Note: The Virtual Machine profiles, including the number of Virtual Machines per desktop, are based on Virtual SAN 5.5. The Virtual SAN 6.0 numbers will be available after GA.

Download

Download the full Virtual SAN Compatibility Guide Ready Nodes technical white paper.

Confirm your choice with the VMware Virtual SAN Hardware Compatibility Guide.

Rating: 5/5


May 16

VMware Horizon 6 Storage Considerations

Overview

This document addresses the challenges associated with end-user computing workloads in a virtualized environment and suggests design considerations for managing them. It focuses on performance, capacity, and operational considerations of the storage subsystem because storage is the foundation of any virtual desktop infrastructure (VDI) implementation. Where possible, it offers multiple solutions to common design choices faced by IT architects tasked with designing and implementing a VMware Horizon storage strategy.

Typical Storage Considerations

Over the years, many end-user computing environments were designed, engineered, and built without proper consideration for specific storage requirements. Some were built on existing shared storage platform offerings.
Others simply had their storage capacity increased without an examination of throughput and performance.
These oversights prevented some VDI projects from delivering on the promises of virtualization.
For success in design, operation, and scale, IT must be at least as diligent in the initial discovery and design phases as in deployment and testing. It is essential to have a strong methodology and a plan to adapt or refine certain elements when technology changes. This document aims to provide guidance for the nuances of storage.
Operating systems are designed without consideration for virtualization technologies or their storage subsystems. This applies to all versions of the Windows operating system, both desktop and server, which are designed to interact with a locally connected magnetic disk resource.
The operating system expects at least one local hard disk to be dedicated to each single instance, giving the OS complete control from the device driver upward with respect to the reading, writing, caching, arrangement, and optimization of the file system components on the disk. When installing the operating system into a virtual machine running on a hypervisor, particularly when running several virtual machines simultaneously on that hypervisor, the IT architect needs to be aware of factors that affect how the operating system works.

VMware Horizon Architecture

Figure 1 presents a logical overview of a validated VMware Horizon® 6 design. The design includes VMware Horizon with View, VMware Workspace™ Portal, and VMware Mirage™, along with the recommended supporting infrastructure. These components work in concert to aggregate identity, access, virtual desktops, applications, and image management in a complete architecture.

Figure 1. VMware Horizon Architecture

Capacity and Sizing Considerations

The primary storage considerations in an end-user computing infrastructure have two dimensions: performance and capacity, which are the focus of this paper.

Importance of IOPS

Input/Output Operations per Second (IOPS) is the performance measurement used to benchmark computer storage devices such as hard disk drives (HDD), solid-state drives (SSD), and storage area networks (SAN). Each disk type discussed in this document has a different IOPS performance statistic and should be evaluated independently.
When you consolidate multiple virtual machines and other user workloads on a hypervisor, you should understand the typical storage performance expected by a single operating system. This requires an understanding of the added contention for access to the storage subsystem that accompanies every subsequent guest operating system that you host on that hypervisor. Although IOPS cannot account for all performance requirements of a storage system, this measure is widely considered the single most important statistic. All the virtual assessment tools offered by VMware partners capture granular IOPS data, giving any IT architect the ability to optimize the storage accurately for end-user-computing workloads.
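
To make the contention point concrete, here is a back-of-the-envelope aggregation; the per-desktop figures are assumptions chosen for illustration and should be replaced with numbers captured by an assessment tool:

    # Illustrative only: assumed per-desktop IOPS, not measurements from this paper.
    steady_state_iops_per_desktop = 10    # assumed steady-state average
    peak_iops_per_desktop = 25            # assumed boot/login-storm peak
    desktops = 500

    print(f"Steady state: {desktops * steady_state_iops_per_desktop:,} IOPS")
    print(f"Peak:         {desktops * peak_iops_per_desktop:,} IOPS")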

The Impact of Latency

Latency can definitely affect performance and in some cases might actually have a greater impact than IOPS. Even if your storage can deliver a million IOPS, it does not guarantee your end users an enjoyable virtual desktop or workspace experience.
When assessing latency, always look up and down the storage stack to get a clear understanding of where latency can build up. It is always good to start at the top layer of the storage stack, where the application is running in the guest operating system, to find the total amount of latency that the application is seeing. Virtual disk latency is one of the key metrics that influences good or bad user experience.

Figure 2. Storage Stack Overview

Download

Download the full VMware Horizon 6 Storage Considerations technical white paper.

Rating: 5/5


May 15

vsanSparse – TechNote for Virtual SAN 6.0 Snapshots

Introduction

Virtual SAN 6.0 introduces a new on-disk format that includes VirstoFS technology. This always-sparse filesystem provides the basis for a new snapshot format, also introduced with Virtual SAN 6.0, called vsanSparse. Through the use of the underlying sparseness of the filesystem and a new, in-memory metadata cache for lookups, vsanSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.

Introducing vsanSparse snapshots

As mentioned in the introduction, Virtual SAN 6.0 has a new on-disk (v2) format that facilitates the introduction of a new type of performance-based snapshot. The new vsanSparse format leverages the underlying sparseness of the new VirstoFS filesystem (v2) on-disk format and a new in-memory caching mechanism for tracking updates. This v2 format is an always-sparse file system (512-byte block size instead of 1MB block size on VMFS-L) and is only available with Virtual SAN 6.0.

Figure 1. vsanSparse disk format

When a virtual machine snapshot is created on Virtual SAN 5.5, a vmfsSparse/redo log object is created (you can find out more about this format in appendix A of this paper). In Virtual SAN 6.0, when a virtual machine snapshot is created, vsanSparse “delta” objects get created.

Why is vsanSparse needed?

The new vsanSparse snapshot format provides Virtual SAN administrators with enterprise-class snapshots and clones. The goal is to improve snapshot performance by continuing to use the existing redo-log mechanism but now utilizing an "in-memory" metadata cache and a more efficient sparse filesystem layout.

How does vsanSparse work?

When a vsanSparse snapshot is taken of a base disk, a child delta disk is created. The parent is now considered a point-in-time (PIT) copy, and the running point of the virtual machine is now the delta. New writes by the virtual machine go to the delta, while the base disk and other snapshots in the chain satisfy reads. To get the current state of the disk, one can take the "parent" disk and redo all writes from the "children" chain; thus the children are referred to as "redo logs". In this way, the vsanSparse format is very similar to the earlier vmfsSparse format.
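
The redo-log behaviour can be pictured with a small, simplified model (an illustration of the chain concept only, not the actual vsanSparse implementation): writes land in the running delta, and a read walks from the newest child back toward the base disk until it finds the requested block.

    # Simplified model of a redo-log snapshot chain (illustration only).
    class Disk:
        def __init__(self, parent=None):
            self.parent = parent
            self.blocks = {}              # sparse: block number -> data

        def write(self, block, data):
            self.blocks[block] = data

        def read(self, block):
            node = self
            while node is not None:       # walk child -> parent until the block is found
                if block in node.blocks:
                    return node.blocks[block]
                node = node.parent
            return b"\x00"                # unwritten blocks read as zeros

    base = Disk()
    base.write(0, b"original")
    snap1 = Disk(parent=base)             # snapshot taken: snap1 is the running point
    snap1.write(1, b"new data")
    print(snap1.read(0))                  # b'original' -- satisfied by the base disk
    print(snap1.read(1))                  # b'new data' -- satisfied by the delta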

Download

Download the full vsanSparse – TechNote for Virtual SAN 6.0 Snapshots.

Rating: 5/5


May 10

VMware® Virtual SAN™ Hardware Guidance

Introduction

VMware® Virtual SAN™ is the industry’s first scale-out, hypervisor-converged storage solution based on the industry-leading VMware vSphere® solution. Virtual SAN is a software-defined storage solution that enables great flexibility and vendor choice in hardware platform.

This document provides guidance regarding hardware decisions—based on creating Virtual SAN solutions using VMware Compatibility Guide–certified hardware—when designing and deploying Virtual SAN. These decisions include the selection of server form factor, SSD, HDD, storage controller, and networking components.
This document does not supersede the official hardware compatibility information found in the VMware Compatibility Guide, which is the single source for up-to-date Virtual SAN hardware-compatibility information and must be used for a list of officially supported hardware.
When designing a Virtual SAN cluster from a selection of VMware Compatibility Guide–certified vendor components, this guide should be used in combination with the VMware® Virtual SAN™ Design and Sizing Guide, the Virtual SAN sizing tool, and other official Virtual SAN documentation from VMware Technical Marketing and VMware Technical Publications.

Conclusion

VMware Virtual SAN is a groundbreaking storage solution that enables unprecedented hardware configuration flexibility through building an individual solution based on preferred server vendor components. The guidance provided in this document enables users to make the best choice regarding their particular storage needs for their software-defined datacenter based on VMware vSphere. When selecting hardware components for a Virtual SAN solution, users should always utilize the VMware Compatibility Guide as the definitive resource tool.

Download

Download the full VMware® Virtual SAN™ Hardware Guidance technical white paper.

Rating: 5/5


May 10

Virtual SAN Hardware Quick Reference Guide

Overview

The purpose of this document is to provide sample server configurations as directional guidelines for use with VMware® Virtual SAN™. Use these guidelines as your first step toward determining the configuration for Virtual SAN.

How to use this document

1. Determine your workload profile requirement for your use case.
2. Refer to Ready Node profiles to determine the approximate configuration that meets your needs.
3. Use the VSAN Hardware Compatibility Guide to pick a Ready Node aligned with the selected profile from the OEM server vendor of your choice.

Additional Resources

For more detail on Virtual SAN design guidance, see:
1. Virtual SAN Ready Node Configurator
2. Virtual SAN Hardware Guidance
3. VMware® Virtual SAN™ 6.0 Design and Sizing Guide.
4. Virtual SAN Sizing Calculator
5. VSAN Assessment Tool

Download

Download the full Virtual SAN Hardware Quick Reference Guide technical white paper.

Rating: 5/5


Apr 10

VMware VSAN 6.2 for ESXi 6.0 with Horizon View Technical Whitepaper

Executive summary

VMware Virtual SAN is a software-defined storage solution introduced by VMware in 2012 that allows you to create a clustered datastore from the storage (SSDs and HDDs, or all-flash using SSDs and PCIe SSDs) that is present in the ESXi hosts. The Virtual SAN solution simplifies storage management through object-based storage systems and fully supports vSphere enterprise features such as HA, DRS, and vMotion. The Virtual SAN storage cluster must be made up of at least three ESXi servers. VMware Virtual SAN is built into the ESXi 6.0 hypervisor and can be used with ESXi hosts that are configured with PERC RAID controllers.

To use Virtual SAN in a hybrid configuration, which is the context of this document, you need at least one SSD and one HDD in each of the servers participating in the Virtual SAN cluster. It is important to note that the SSD does not contribute to the storage capacity: the SSDs are used for read caching and write buffering, whereas the HDDs provide persistent storage. Virtual SAN is highly available because it is based on a distributed, object-based RAIN (redundant array of independent nodes) architecture. Virtual SAN is fully integrated with vSphere. It aims to simplify storage placement decisions for vSphere administrators, and its goal is to provide both high availability and scale-out storage functionality.

Download

Download the full VMware VSAN 6.2 for ESXi 6.0 with Horizon View Technical Whitepaper.

Rating: 5/5


Apr 09

VMware vSAN 6.2 Technical FAQ

Radically simple, enterprise-class native storage for vSphere

Q: What is VMware Virtual SAN?
Q: What are the use cases for Virtual SAN?
Q: What are the most significant new capabilities in Virtual SAN 6.2?
Q: What are the software requirements for Virtual SAN?
Q: What are the hardware requirements for Virtual SAN?
Q: Why use Virtual SAN Ready Nodes?
Q: What configurations are not supported?
Q: Are all Virtual SAN nodes required to carry disks?
Q: Which types of virtual switches are supported?

Download

Download the full VMware vSAN 6.2 Frequently Asked Questions.

Rating: 5/5


Apr 04

USB Device Redirection, Configuration, and Usage in View Virtual Desktops

Introduction

In the 5.1 release of View, VMware introduced some complex configuration options for the usage and management of USB devices in a View virtual desktop session. This white paper gives a high-level overview of USB remoting, discusses the configuration options, and provides some practical worked examples to illustrate how these options can be used.

USB Redirection Overview

We are all familiar with using USB devices on laptop or desktop machines. If you are working in a virtual desktop infrastructure (VDI) environment such as View, you may want to use your USB devices in the virtual desktop, too. USB device redirection is a function in View that allows USB devices to be connected to the virtual desktop as if they had been physically plugged into it. Typically, the user selects a device from the VMware Horizon Client menu and selects it to be forwarded to the virtual desktop. After a few moments, the device appears in the guest virtual machine, ready for use.

Figure 1. USB Redirection

Definitions of Terms

In this paper, various terms are used to describe the components involved in USB redirection. The following are some brief definitions of terms:

  • USB redirection – Forwarding of the functions of a USB device from the physical endpoint to the View virtual machine.
  • Client computer, or client, or client machine – Physical endpoint displaying the virtual desktop with which the user interfaces, and where the USB device is physically plugged in.
  • Virtual desktop or guest virtual machine – The Windows desktop stored in the data center that is displayed remotely on the endpoint. This virtual desktop runs a Windows guest operating system, and has the View Agent installed on it.
  • Soft client – Horizon Client in software format, such as a Horizon Client for Windows or Linux. The soft client is installed on a hardware endpoint, such as a laptop, and displays the virtual desktop on the endpoint.
  • Zero client – A hardware-based client used to connect to a View desktop. Stateless device containing no operating system. Delivers the client login interface for View.
  • Thin client – A hardware device similar to a zero client, but with an OS installed. The Horizon Client is installed onto the OS of the thin client. Both devices generally lack local user-accessible storage and simply connect to the virtual desktop in the data center.
  • USB interface – A function within a USB device, such as mouse or keyboard or audio. Some USB devices have multiple functions and are called composite (USB) devices.
  • Composite (USB) device – A USB device with multiple functions, or interfaces.
  • HID – Human interface device. A device with which the user physically interacts, such as mice, keyboards, and joysticks.
  • VID – The vendor identification, or code, for a USB device, which identifies the vendor that produced the device.
  • PID – The product identification, or code, which, combined with the VID, uniquely identifies a USB device within a vendor’s family of USB products. The VID and PID are used within View USB configuration settings to identify the specific driver needed for the device.
  • USB device filtering – Restricting some USB devices from being forwarded from the endpoint to the virtual desktop. You specify which devices are prevented from being forwarded: individual VID-PID device models, device families such as storage devices, or devices from specific vendors (a simple matching sketch follows this list).
  • USB device splitting – The ability to configure a USB device so that, when it is connected to a View desktop, some of its USB interfaces remain local to the client endpoint while other interfaces are forwarded to the guest. This can result in an improved user experience of the device in a virtual environment.
  • USB Boolean settings – Simple “on” or “off” settings. For example, whether a specific feature is enabled (true) or disabled (false).
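
The filtering decision described above can be pictured with a small sketch; the rule format and the example VID/PID values are hypothetical and do not represent the actual View/Horizon configuration syntax:

    # Illustration of VID/PID-based exclude filtering (hypothetical rule format).
    def rule_matches(rule, vid, pid):
        rule_vid, rule_pid = rule
        return rule_vid == vid and rule_pid in (pid, "*")   # "*" = whole vendor

    def is_forwarded(vid, pid, exclude_rules):
        return not any(rule_matches(r, vid, pid) for r in exclude_rules)

    # Hypothetical rules: block one specific device model and one entire vendor.
    exclude_rules = [("0781", "554c"), ("abcd", "*")]
    print(is_forwarded("0781", "554c", exclude_rules))   # False - model is excluded
    print(is_forwarded("0781", "5567", exclude_rules))   # True  - still forwarded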

Download

Download the full USB Device Redirection, Configuration, and Usage in View Virtual Desktops white paper.

Rating: 5/5