Apr 24

Best Practices for using VMware Converter

This video provides an overview of best practices for converting a machine with VMware Converter, and is based on VMware knowledge base article 1004588. It also offers tips to consider when converting your machine and can help you avoid errors such as:
Unknown error returned by VMware Converter Agent
Out of disk space
Failed to establish Vim connection
Import host not found
P2VError UFAD_SYSTEM_ERROR(Internal Error)
Pcopy_CloneTree failed with err=80
The file exists (80)
Failed to connect
Giving up trying to connect
Failed to take snapshot of the source volume
stcbasic.sys not installed or snapshot creation failed. err=2
Can’t create undo folder
sysimage.fault.FileCreateError
sysimage.fault.ReconfigFault
sysimage.fault.PlatformError
Number of virtual devices exceeds maximum for a given controller
TooManyDevices
QueryDosDevice: ret=270 size=1024 err=0
Error opening disk device: Incorrect function (1)
Vsnap does not have admin rights
Specified key identifier already exists
vim.fault.NoDiskSpace
Check out Amazon’s selection of books on VMware: http://amzn.to/2pZInmt

Rating: 5/5


Mar 11

VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices

Executive Summary

With the rise in popularity of hybrid cloud computing, where VM-sensitive data leaves the traditional IT environment and traverses over the public networks, IT administrators and architects need a simple and secure way to protect critical VM data that traverses across clouds and over long distances.

The Encrypted vMotion feature available in VMware vSphere® 6.5 addresses this challenge by introducing a software approach that provides end-to-end encryption for vMotion network traffic. The feature encrypts all the vMotion data inside the vmkernel by using the most widely used AES-GCM encryption standards, and thereby provides data confidentiality, integrity, and authenticity even if vMotion traffic traverses untrusted network links.

Experiments conducted in the VMware performance labs using industry-standard workloads show the following:

  • vSphere 6.5 Encrypted vMotion performs nearly the same as regular, unencrypted vMotion.
  • The CPU cost of encrypting vMotion traffic is very moderate, thanks to the performance optimizations added to the vSphere 6.5 vMotion code path.
  • vSphere 6.5 Encrypted vMotion provides the proven reliability and performance guarantees of regular, unencrypted vMotion, even across long distances.

Introduction

VMware vSphere® vMotion® [1] provides the ability to migrate a running virtual machine from one vSphere host to another, with no perceivable impact to the virtual machine’s performance. vMotion brings enormous benefits to administrators—it reduces server downtime and facilitates automatic load-balancing.

During migration, the entire memory and disk state associated with a VM, along with its metadata, are transferred over the vMotion network. During migration it is possible for an attacker with sufficient network privileges to compromise a VM by modifying its memory contents in transit, subverting the VM’s applications or its guest operating system. Because of this possible security risk, VMware has long recommended that administrators use an isolated or secured network for vMotion traffic, separate from other datacenter networks such as the management or provisioning networks, so that the VM’s sensitive data is protected as it traverses the network.

Even though this recommended approach adds some network and administrative complexity, it works well in a traditional IT environment where the customer owns the complete network infrastructure and can secure it. In a hybrid cloud, however, workloads move dynamically between clouds and datacenters over secured and unsecured network links. Therefore, it is essential to secure sensitive vMotion traffic at the network endpoints. This protects critical VM data even as the vMotion traffic leaves the traditional IT environment and traverses public networks.

vSphere 6.5 introduces Encrypted vMotion, which provides end-to-end encryption of vMotion traffic and protects VM data from eavesdropping occurrences on untrusted network links. Encrypted vMotion provides complete confidentiality, integrity, and authenticity of the data transferred over a vMotion network without any requirement for dedicated networks or additional hardware.

The sections that follow describe:

  • vSphere 6.5 Encrypted vMotion technology and architecture
  • How to configure Encrypted vMotion from the vSphere Client
  • Performance implications of encrypting vMotion traffic using real-life workload scenarios
  • Best practices for deployment

Encrypted vMotion Architecture

vMotion uses TCP as the transport protocol for migrating the VM data. To secure VM migration, vSphere 6.5 encrypts all the vMotion traffic, including the TCP payload and vMotion metadata, using the most widely used AES-GCM encryption standard algorithms, provided by the FIPS-certified vmkernel vmkcrypto module.
Encrypted vMotion Workflow
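
As a rough illustration of what AES-GCM provides (confidentiality plus integrity per message), here is a small Python sketch using the third-party cryptography package. It is purely illustrative and assumes nothing about the vmkernel vmkcrypto implementation; the key handling and the 4 KB "page" are made up for the example.

```python
# Illustrative only: AES-GCM authenticated encryption in Python, using the
# third-party "cryptography" package. This is NOT the vmkernel vmkcrypto
# implementation; it merely shows the confidentiality and integrity
# properties the paper attributes to AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key (key management not shown)
aesgcm = AESGCM(key)

page = b"\x00" * 4096            # a pretend 4 KB memory page
metadata = b"vmotion-metadata"   # authenticated but not encrypted (AAD)
nonce = os.urandom(12)           # must be unique per message

ciphertext = aesgcm.encrypt(nonce, page, metadata)

# Decryption verifies the GCM tag; any tampering with the ciphertext or the
# associated metadata raises cryptography.exceptions.InvalidTag.
assert aesgcm.decrypt(nonce, ciphertext, metadata) == page
```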

Encryption Protocol

Encrypted vMotion does not rely on the Secure Sockets Layer (SSL) or Internet Protocol Security (IPsec) technologies for securing vMotion traffic. Instead, it implements a custom encrypted protocol above the TCP layer. This is done primarily for performance, but also for the reasons explained below.

SSL is compute intensive and implemented completely in user space, while vMotion, which is part of the core ESXi, executes in kernel space. This means that if vMotion were to rely on SSL, each encryption/decryption call would need to cross between kernel and user space, resulting in excessive performance overhead. Using the encryption algorithms provided by the vmkernel vmkcrypto module enables vMotion to avoid such a performance penalty.

Although IPSec can be used to secure vMotion traffic, its usability is limited in the vSphere environment because ESXi hosts support IPsec only for IPv6 traffic, but not for IPv4 traffic. Besides, implementing a custom protocol above the TCP layer gives vMotion the ability to create the appropriate number of vMotion worker threads, and coordinate efficiently among them to spread the encryption/decryption CPU load across multiple cores.

Download

Download a full VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices Study.

Rating: 5/5


Mar 10

Architecting Microsoft SQL Server on VMware vSphere Best Practices Guide

Introduction

Microsoft SQL Server is one of the most widely deployed database platforms in the world, with many
organizations having dozens or even hundreds of instances deployed in their environments. The flexibility
of SQL Server, with its rich application capabilities combined with the low costs of x86 computing, has led
to a wide variety of SQL Server installations ranging from large data warehouses to small, highly
specialized departmental and application databases. The flexibility at the database layer translates
directly into application flexibility, giving end users more useful application features and ultimately
improving productivity.

Application flexibility often comes at a cost to operations. As the number of applications in the enterprise
continues to grow, an increasing number of SQL Server installations are brought under lifecycle
management. Each application has its own set of requirements for the database layer, resulting in
multiple versions, patch levels, and maintenance processes. For this reason, many application owners
insist on having an SQL Server installation dedicated to an application. As application workloads vary
greatly, many SQL Server installations are allocated more hardware than they need, while others are
starved for compute resources.

Purpose

This document provides best practice guidelines for designing Microsoft SQL Server on vSphere. The
recommendations are not specific to any particular set of hardware or to the size and scope of any
particular SQL Server implementation. The examples and considerations in this document provide
guidance only and do not represent strict design requirements, as varying application requirements would
result in many valid configuration possibilities.

vSphere Best Practices for SQL Server

A properly designed vSphere setup for virtualized SQL Server is crucial to the successful
implementation of enterprise applications. One main difference between designing for the performance
of critical databases and designing for consolidation, the traditional practice when virtualizing, is
that when you design for performance you strive to reduce resource contention between virtual machines
as much as possible, and even to eliminate contention altogether. The following sections outline
VMware-recommended practices for designing your vSphere environment to optimize for best performance.

Right Sizing

Right sizing is a term that is used when sizing virtual machines to contrast with sizing practices of physical servers. For example, a DBA determines that the number of CPUs required for a newly designed database server is eight CPUs. When deployed on a physical machine, typically the DBA will ask for more CPU power than the requirements at that point in time, sometimes even twice as much. The reason for this is usually that it is difficult for the DBA to add CPUs to this physical server after it has been deployed.

The general practice is to purchase the extra resources (CPU, disk, network, and memory) for the
physical server to accommodate for future growth requirements, sizing miscalculations, and any
unforeseen circumstances that can cause the database to require more resources in the future than
originally anticipated.

Download

Download a full Architecting Microsoft SQL Server on VMware vSphere Best Practices Guide.

Rating: 5/5


Dec 14

Configuration Maximum changes from vSphere 6.0 to vSphere 6.5

vSphere 6.5 is now available, and with every release VMware makes changes to the configuration maximums for vSphere. Since VMware never highlights what has changed between releases, I went through the official vSphere 6.5 Configuration Maximums documentation and compared it with the vSphere 6.0 Configuration Maximums. The changes between the versions are listed below.

Configuration vSphere 6.5 vSphere 6.0

Virtual Machines Maximums

RAM per VM 6128GB 4080GB
Virtual NVMe adapters per VM 4 N/A
Virtual NVMe targets per virtual SCSI adapter 15 N/A
Virtual NVMe targets per VM 60 N/A
Virtual RDMA Adapters per VM 1 N/A
Video memory per VM 2GB 512MB

ESXi Host Maximums

Logical CPUs per host 576 480
RAM per host 12TB 6TB *some exceptions
LUNs per server 512 256
Number of total paths on a server 2048 1024
FC LUNs per host 512 256
LUN ID 0 to 16383 0 to 1023
VMFS Volumes per host 512 256
FT virtual machines per cluster 128 98

vCenter Maximum

Hosts per vCenter Server 2000 1000
Powered-on VMs per vCenter Server 25000 10000
Registered VMs per vCenter Server 35000 15000
Number of hosts per datacenter 2000 500
Maximum mixed vSphere Client (HTML5) + vSphere Web Client simultaneous connections per VC 60 (30 Flex, 30 maximum HTML5) N/A
Maximum supported inventory for vSphere Client (HTML5) 10,000 VMs, 1,000 Hosts N/A
Host Profile Datastores 256 120
Host Profile Created 500 1200
Host Profile Attached 500 1000

Platform Services Controller Maximums

Maximum PSCs per vSphere Domain 10 8

vCenter Server Extensions Maximums

[VUM] VMware Tools upgrade per ESXi host 30 24
[VUM] Virtual machine hardware upgrade per host 30 24
[VUM] VMware Tools scan per VUM server 200 90
[VUM] VMware Tools upgrade per VUM server 200 75
[VUM] Virtual machine hardware scan per VUM server 200 90
[VUM] Virtual machine hardware upgrade per VUM server 200 75
[VUM] ESXi host scan per VUM server 232 75
[VUM] ESXi host patch remediation per VUM server 232 71
[VUM] ESXi host upgrade per VUM server 232 71

Virtual SAN Maximums

Virtual machines per cluster 6000 6400
Number of iSCSI LUNs per Cluster 1024 N/A
Number of iSCSI Targets per Cluster 128 N/A
Number of iSCSI LUNs per Target 256 N/A
Max iSCSI LUN size 62TB N/A
Number of iSCSI sessions per Node 1024 N/A
iSCSI IO queue depth per Node 4096 N/A
Number of outstanding writes per iSCSI LUN 128 N/A
Number of outstanding IOs per iSCSI LUN 256 N/A
Number of initiators that register a PR key for an iSCSI LUN 64 N/A

Storage Policy Maximums

Maximum number of VM storage policies 1024 Not Published
Maximum number of VASA providers 1024 Not Published
Maximum number of rule sets in VM storage policy 16 N/A
Maximum capabilities in VM storage policy rule set 64 N/A
Maximum vSphere tags in virtual machine storage policy 128 Not Published

Download

Download a full VMware vSphere 6.5 Configuration Maximums.
Download a full VMware vSphere 6.0 Configuration Maximums.

Rating: 5/5


Nov 01

VMware® Virtual SAN™ 6.2 Stretched Cluster & 2 Node Guide

Jase McCarty
Storage and Availability Business Unit
VMware
v 6.2.0 / March 2016 / version 0.30

Introduction

VMware Virtual SAN 6.1, shipping with vSphere 6.0 Update 1, introduced a new feature called VMware Virtual SAN Stretched Cluster. Virtual SAN Stretched Cluster is a specific configuration implemented in environments where disaster/downtime avoidance is a key requirement. This guide was developed to provide additional insight and information for installation, configuration and operation of a Virtual SAN Stretched Cluster infrastructure in conjunction with VMware vSphere. This guide will explain how vSphere handles specific failure scenarios and discuss various design considerations and operational procedures.

Virtual SAN Stretched Clusters with Witness Host refers to a deployment where a user sets up a Virtual SAN cluster with 2 active/active sites with an identical number of ESXi hosts distributed evenly between the two sites. The sites are connected via a high bandwidth/low latency link.

The third site hosting the Virtual SAN Witness Host is connected to both of the active/active data-sites. This connectivity can be via low bandwidth/high latency links.


Stretched Cluster Configuration

Each site is configured as a Virtual SAN Fault Domain. The nomenclature used to describe a Virtual SAN Stretched Cluster configuration is X+Y+Z, where X is the number of ESXi hosts at data site A, Y is the number of ESXi hosts at data site B, and Z is the number of witness hosts at site C. Data sites are where virtual machines are deployed. The minimum supported configuration is 1+1+1 (3 nodes). The maximum configuration is 15+15+1 (31 nodes).
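
As a quick illustration of the X+Y+Z nomenclature and its supported range, the small Python helper below checks whether a proposed configuration falls within the documented 1+1+1 to 15+15+1 limits. It is a hypothetical helper of my own, not part of any VMware tooling.

```python
# Hypothetical helper for the X+Y+Z stretched cluster nomenclature described
# above: X and Y are data-site host counts (1-15 each), Z is always 1 witness.
def is_supported_stretched_config(config: str) -> bool:
    try:
        x, y, z = (int(part) for part in config.split("+"))
    except ValueError:
        return False
    return 1 <= x <= 15 and 1 <= y <= 15 and z == 1

assert is_supported_stretched_config("1+1+1")        # minimum, 3 nodes
assert is_supported_stretched_config("15+15+1")      # maximum, 31 nodes
assert not is_supported_stretched_config("16+15+1")  # too many data hosts
assert not is_supported_stretched_config("5+5+2")    # only one witness host
```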

In Virtual SAN Stretched Clusters, there is only one witness host in any configuration. A virtual machine deployed on a Virtual SAN Stretched Cluster will have one copy of its data on site A, a second copy of its data on site B, and any witness components placed on the witness host in site C. This configuration is achieved through fault domains along with host and VM groups and affinity rules. In the event of a complete site failure, there will be a full copy of the virtual machine data as well as greater than 50% of the components available, allowing the virtual machine to remain available on the Virtual SAN datastore. If the virtual machine needs to be restarted on the other site, vSphere HA will handle this task.

Support Statements

vSphere versions

Virtual SAN Stretched Cluster configurations require vSphere 6.0 Update 1 (U1) or greater. This implies both vCenter Server 6.0 U1 and ESXi 6.0 U1. This version of vSphere includes Virtual SAN version 6.1. This is the minimum version required for Virtual SAN Stretched Cluster support.

vSphere & Virtual SAN

Virtual SAN version 6.1 introduced several features, including both All-Flash and Stretched Cluster functionality. There are no limitations on the edition of vSphere used for Virtual SAN. However, for Virtual SAN Stretched Cluster functionality, vSphere DRS is very desirable. DRS will provide initial placement assistance and will automatically migrate virtual machines to their correct site in accordance with Host/VM affinity rules. It can also help with locating virtual machines back at their correct site when a site recovers after a failure. Otherwise, the administrator will have to carry out these tasks manually. Note that DRS is only available in the Enterprise edition and higher of vSphere.

Hybrid and All-Flash support

Virtual SAN Stretched Cluster is supported on both hybrid configurations (hosts with local storage comprised of both magnetic disks for capacity and flash devices for cache) and all-flash configurations (hosts with local storage made up of flash devices for capacity and flash devices for cache).

On-disk formats

VMware supports Virtual SAN Stretched Cluster with the v2 on-disk format only. The v1 on-disk format is based on VMFS and is the original on-disk format used for Virtual SAN. The v2 on-disk format is the version which comes by default with Virtual SAN version 6.x. Customers that upgraded from the original Virtual SAN 5.5 to Virtual SAN 6.0 may not have upgraded the on-disk format from v1 to v2, and are thus still using v1. VMware recommends upgrading the on-disk format to v2 for improved performance and scalability, as well as for stretched cluster support. In Virtual SAN 6.2 clusters, the v3 on-disk format allows for additional features, discussed later, specific to 6.2.

Features supported on VSAN but not VSAN Stretched Clusters

The following is a list of products and features supported on Virtual SAN but not on a stretched cluster implementation of Virtual SAN.

  • SMP-FT, the new Fault Tolerant VM mechanism introduced in vSphere 6.0, is supported on standard VSAN 6.1 deployments, but it is not supported on stretched cluster VSAN deployments at this time. The exception to this rule is 2 Node configurations running in the same physical location.
  • The maximum value for NumberOfFailuresToTolerate in a Virtual SAN Stretched Cluster configuration is 1. This is the limit due to the maximum number of Fault Domains being 3.
  • In a Virtual SAN Stretched Cluster, there are only 3 Fault Domains. These are typically referred to as the Preferred, Secondary, and Witness Fault Domains. Standard Virtual SAN configurations can be comprised of up to 32 Fault Domains.
  • The Erasure Coding feature introduced in Virtual SAN 6.2 requires 4 Fault Domains for RAID5 type protection and 6 Fault Domains for RAID6 type protection. Because Stretched Cluster configurations only have 3 Fault Domains, Erasure Coding is not supported on Stretched Clusters at this time.

Download

Download a full VMware Virtual SAN 6.2 Stretched Cluster Guide.

Rating: 5/5


Oct 06

VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance

TECHNICAL WHITE PAPER

Overview

The purpose of this document is to explain how to size bandwidth requirements for Virtual SAN in Stretched Cluster configurations. This document only covers the Virtual SAN network bandwidth requirements.

In Stretched Cluster configurations, two data fault domains have one or more hosts, and the third fault domain contains a witness host or witness appliance. In this document each data fault domain will be referred to as a site.

Virtual SAN Stretched Cluster configurations can be spread across distances, provided bandwidth and latency requirements are met.


Stretched Cluster Configuration

General Guidelines

The bandwidth requirement between the main sites is highly dependent on the workload to be run on Virtual SAN, amount of data, and handling of failure scenarios. Under normal operating conditions, the basic bandwidth requirements are:


Basic bandwidth requirements

Bandwidth Requirements Between Sites

Workloads are seldom all reads or writes, and normally include a general read to write ratio for each use case.

A good example of this would be a VDI workload. During peak utilization, VDI often behaves with a 70/30 write to read ratio. That is to say that 70% of the IO is due to write operations and 30% is due to read IO. As each solution has many factors, true ratios should be measured for each workload.

Take, as a general example, a total IO profile that requires 100,000 IOPS, of which 70% are writes and 30% are reads. In a Stretched configuration, the write IO is what the inter-site bandwidth requirement is sized against.

With Stretched Clusters, read traffic is, by default, serviced by the site that the VM resides on. This concept is called Read Locality.

The required bandwidth between two data sites (B) is equal to Write bandwidth (Wb) * data multiplier (md) * resynchronization multiplier (mr):

B = Wb * md * mr

The data multiplier accounts for the overhead of Virtual SAN metadata traffic and miscellaneous related operations; VMware recommends a data multiplier of 1.4.

The resynchronization multiplier is included to account for resynchronization events. It is recommended to allocate an additional 25% of bandwidth capacity on top of the required bandwidth for resynchronization traffic, which corresponds to a resynchronization multiplier of 1.25.
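
To make the arithmetic concrete, here is a short Python sketch of the B = Wb * md * mr formula. The example workload (10,000 write IOPS at 4 KB per write) is assumed for illustration, not taken from this guidance.

```python
# Sketch of the inter-site bandwidth formula B = Wb * md * mr from above.
# The example workload (10,000 write IOPS at 4 KB per write) is assumed,
# not taken from this guidance.
def intersite_bandwidth_mbps(write_iops, io_size_bytes,
                             data_multiplier=1.4, resync_multiplier=1.25):
    """Return the required bandwidth between the two data sites in Mbps."""
    wb_mbps = write_iops * io_size_bytes * 8 / 1_000_000   # write bandwidth (Wb)
    return wb_mbps * data_multiplier * resync_multiplier

# Example: 10,000 write IOPS of 4 KB each -> Wb = 320 Mbps,
# so B = 320 * 1.4 * 1.25 = 560 Mbps between the data sites.
print(f"{intersite_bandwidth_mbps(10_000, 4_000):.0f} Mbps")
```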

Bandwidth Requirements Between Witness & Data Sites

Witness bandwidth isn’t calculated in the same way as inter-site bandwidth requirements. Witnesses do not maintain VM data, but rather only component metadata.
It is important to remember that data is stored on Virtual SAN in the form of objects. Objects are comprised of one or more components and are created for items such as:

  • VM Home or namespace
  • VM Swap object
  • Virtual Disks
  • Snapshots

Objects can be split into more than 1 component when the size is >255GB, and/or a Number of Stripes (stripe width) policy is applied. Additionally, the number of objects/components for a given Virtual Machine is multiplied
when a Number of Failures to Tolerate (FTT) policy is applied for data protection and availability.

The required bandwidth between the Witness and each site is equal to ~1138 B x Number of Components / 5s.
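
A minimal sketch of that rule of thumb, assuming a hypothetical count of 1,000 components (both the helper and the example count are mine):

```python
# Sketch of the witness bandwidth rule of thumb quoted above:
# roughly 1138 bytes of metadata per component every 5 seconds.
# The 1,000-component count is an assumed example.
def witness_bandwidth_mbps(num_components, bytes_per_component=1138, interval_s=5):
    """Return the approximate required bandwidth to the witness site in Mbps."""
    return num_components * bytes_per_component * 8 / interval_s / 1_000_000

print(f"{witness_bandwidth_mbps(1_000):.2f} Mbps")   # ~1.82 Mbps for 1,000 components
```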

Download

Download a full VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance technical white paper.

Rating: 5/5


Jul 29

NSX-V Multi-site Options and Cross-VC NSX Design Guide

Created by Humair Ahmed on Jul 22, 2016 1:36 PM. Last modified by Humair Ahmed on Jul 25, 2016 2:19 PM.
This design guide is in initial draft status and feedback is welcome for next updated version release.
Please send feedback to hahmed@vmware.com.

The goal of this design guide is to outline several NSX solutions available for multi-site data center connectivity before digging deeper into the details of the Cross-VC NSX multi-site solution. Learn how Cross-VC NSX enables logical networking and security across multiple vCenter domains/sites and how it provides enhanced solutions for specific use cases. No longer is logical networking and security constrained to a single vCenter domain. Cross-VC NSX use cases, architecture, functionality, deployment models, design, and failure/recovery scenarios are discussed in detail.

This document is targeted toward virtualization and network architects interested in deploying VMware® NSX Network virtualization solution in a vSphere environment.

The design guide addresses the following topics:

  • Why Multi-site?
  • Traditional Multi-site Challenges
  • Why VMware NSX for Multi-site Data Center Solutions
  • NSX Multi-site Solution

Cross-VC NSX:

  • Use Cases
  • Architecture and Functionality
  • Deployment Models
  • Design Guidance
  • Failure/Recovery scenarios

Cross VC NSX Overview

VMware NSX provides network virtualization technology that decouples the networking services from the underlying physical infrastructure. By replicating traditional networking hardware constructs and moving the network intelligence to software, logical networks can be created efficiently over any basic IP network transport. The software based approach to networking provides the same benefits to the network as server virtualization provided for compute.

Prior to NSX 6.2, although NSX provided the flexibility, agility, efficiency, and other benefits of network virtualization, logical networking and security were constrained to the boundaries of a single vCenter domain.

Figure 16 below shows NSX logical networking and security deployed within a single site and across a single vCenter domain.
Figure 16 – NSX logical networking and security within a single site and a single vCenter domain.

Although it was possible to use NSX with one vCenter domain and stretch logical networking and security across sites, the benefits of network virtualization with NSX were still limited to one vCenter domain. Figure 17 below shows multiple vCenter domains, which happen to also be at different sites, each requiring separate NSX Controllers and having isolated logical networking and security.

Thanks to all the contributors and reviewers of this document.

This will also soon be posted on our NSX Technical Resources website (link below):
http://www.vmware.com/products/nsx/resources.html

Feedback and Comments to the Authors and the NSX Solution Team are highly appreciated.
– The VMware NSX Solution Team

Download

Download Multi-site Options and Cross-VC NSX Design Guide.pdf (15.5 MB).

Rating: 5/5


Jun 14

VMware vCenter Server 6.0 Performance and Best Practices

Introduction

VMware vCenter Server™ 6.0 substantially improves performance over previous vCenter Server versions. This paper demonstrates the improved performance in vCenter Server 6.0 compared to vCenter Server 5.5, and shows that vCenter Server with the embedded vPostgres database now performs as well as vCenter Server with an external database, even at vCenter Server’s scale limits. This paper also discusses factors that affect vCenter Server performance and provides best practices for vCenter Server performance.

What’s New in vCenter Server 6.0

vCenter Server 6.0 brings extensive improvements in performance and scalability over vCenter Server 5.5:

  • Operational throughput is over 100% higher, and certain operations are over 80% faster.
  • VMware vCenter Server™ Appliance™ now has the same scale limits as vCenter Server on Windows with an external database: 1,000 ESXi hosts, 10,000 powered-on virtual machines, and 15,000 registered virtual machines.
  • VMware vSphere® Web Client performance has improved, with certain pages over 90% faster.

In addition, vCenter Server 6.0 provides new deployment options:

  • Both vCenter Server on Windows and VMware vCenter Server Appliance provide an embedded vPostgres database as an alternative to an external database. (vPostgres replaces the SQL Server Express option that was available in previous vCenter versions.)
  • The embedded vPostgres database supports vCenter’s full scale limits when used with the vCenter Server Appliance.

Performance Comparison with vCenter Server 5.5

In order to demonstrate and quantify performance improvements in vCenter Server 6.0, this section compares 6.0 and 5.5 performance at several inventory and workload sizes. In addition, this section compares vCenter Server 6.0 on Windows to the vCenter Server Appliance at different inventory sizes, to highlight the larger scale limits in the Appliance in vCenter 6.0. Finally, this section illustrates the performance gained by provisioning vCenter with additional resources.

The workload for this comparison uses vSphere Web Services API clients to simulate a self-service cloud environment with a large amount of virtual machine “churn” (that is, frequently creating, deleting, and reconfiguring virtual machines). Each client repeatedly issues a series of inventory management and provisioning operations to vCenter Server. Table 1 lists the operations performed in this workload. The operations listed here were chosen from a sampling of representative customer data. Also, the inventories in this experiment used vCenter features including DRS, High Availability, and vSphere Distributed Switch. (See Appendix A for precise details on inventory configuration.)

Table 1. Operations performed in the performance comparison workload.

Results

Figure 3 shows vCenter Server operation throughput (in operations per minute) for the heaviest workload for each inventory size. Performance has improved considerably at all sizes. For example, for the large inventory setup (Figure 3, right), operational throughput has increased from just over 600 operations per minute in vCenter Server 5.5 to over 1,200 operations per minute in vCenter Server 6.0 for Windows: an improvement of over 100%.
The other inventory sizes show similar gains in operational throughput.


Figure 3. vCenter throughput at several inventory sizes, with heavy workload (higher is better). Throughput has increased at all inventory sizes in vCenter Server 6.0.

Figure 4 shows median latency across all operations in the heaviest workload for each inventory size. Just as with operational throughput in Figure 3, latency has improved at all inventory sizes. For example, for the large inventory setup (Figure 4, right), median operational latency has decreased from 19.4 seconds in vCenter Server 5.5 to 4.0 seconds in vCenter Server Appliance 6.0: a decrease of about 80%. The other inventory sizes also show large decreases in operational latency.


Figure 4. vCenter Server median latency at several inventory sizes, with heavy workload (lower is better). Latency has decreased at all inventory sizes in vCenter 6.0.
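
As a quick sanity check, the quoted percentages follow directly from the reported numbers; the small Python snippet below simply redoes that arithmetic.

```python
# Re-deriving the quoted improvements from the reported numbers.
def pct_increase(old, new):
    return (new - old) / old * 100

def pct_decrease(old, new):
    return (old - new) / old * 100

# Throughput: ~600 ops/min (vCenter 5.5) -> ~1,200 ops/min (vCenter 6.0)
print(f"throughput gain: {pct_increase(600, 1200):.0f}%")   # 100%
# Median latency: 19.4 s (vCenter 5.5) -> 4.0 s (vCenter Server Appliance 6.0)
print(f"latency drop: {pct_decrease(19.4, 4.0):.0f}%")      # ~79%, i.e. about 80%
```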

Download

Download a full VMware vCenter Server 6.0 Performance and Best Practices Technical White Paper.

Rating: 5/5


Jun 13

NSX Distributed Firewalling Policy Rules Configuration Guide

Created by nikhilvmw on Sep 23, 2014 5:16 PM. Last modified by nikhilvmw on Nov 6, 2014 2:19 PM.
VMware NSX for vSphere, release 6.0.x.

This document covers how to create security policy rules in VMware NSX. It describes the different options for configuring security rules, either through the Distributed Firewall or via the Service Composer user interface, and explains the unique options NSX offers to create dynamic policies based on infrastructure context.

Thanks to Francis Guillier, Kausum Kumar and Srini Nimmagadda for helping author this document.
Regards,
NSX Team

Introduction

VMware NSX Distributed Firewall (DFW) provides the capability to enforce firewalling functionality directly at the virtual machine (VM) vNIC layer. It is a core component of the micro-segmentation security model, where east-west traffic can now be inspected at near line rate, preventing lateral-movement attacks.

This technical brief gives details about DFW policy rule configuration with NSX. Both DFW security policy objects and DFW consumption model will be discussed in this document.

We assume the reader already has some knowledge of DFW and Service Composer functions. Please refer to the appropriate collateral if you need more information on these NSX components.

Distributed Firewall Object Grouping Model

NSX provides the capability to micro-segment your SDDC to provide an effective security posture. To implement micro-segmentation in your SDDC, NSX provides various ways of grouping VMs and applying security policies to them. This document specifies in detail the different ways grouping can be done and when you should use one over the other.

Security policy rules can be written in various ways, as shown below:

Network Based Policies:

    This is the traditional approach of grouping based on L2 or L3 elements. Grouping can be based on MAC addresses, IP addresses, or a combination of both. NSX supports this approach of grouping objects. The security team needs to be aware of the networking infrastructure to deploy network-based policies. There is a high probability of security rule sprawl, as grouping based on dynamic attributes is not used. This method of grouping works well if you are migrating existing rules from a different vendor’s firewall.


When not to use this: In dynamic environments, such as self-service IT or cloud-automated deployments, where VMs and application topologies are added and deleted at a rapid rate, a MAC address-based grouping approach may not be suitable, because there will be a delay between provisioning a VM and adding its MAC address to the group. If you have an environment with high mobility, such as vMotion and HA, L3/IP-based grouping approaches may not be adequate either.

Infrastructure Based Policies:

    In this approach, grouping is based on SDDC infrastructure such as vCenter clusters, logical switches, distributed port groups, etc. An example of this would be clusters 1 through 4 being earmarked for PCI-type applications. In such a case, grouping can be done based on cluster names and rules can be enforced based on these groups. Another example would be if you know which logical switches in your environment are connected to which applications, e.g., an App Tier logical switch contains all VMs pertaining to application ‘X’. The security team needs to work closely with the vCenter administration team to understand logical and physical boundaries.

    When not to use this: If there are no physical or logical boundaries in your SDDC environment, then this type of approach is not suitable. Also, you need to be very careful about where you can deploy your applications. For example, if you would like to deploy a PCI workload to any cluster that has adequate compute resources available, the security posture cannot be tied to a cluster but should move with the application.

Application Based Policies:

    In this approach, grouping is based on the application type (e.g., VMs tagged as “Web_Servers”), the application environment (e.g., all resources tagged as “Production_Zone”), and the application security posture. The advantage of this approach is that the security posture of the application is not tied to either network constructs or SDDC infrastructure. Security policies can move with the application irrespective of network or infrastructure boundaries. Policies can be templated and reused across instances of the same types of applications and workloads. You can use a variety of mechanisms to group VMs. The security team needs to be aware only of the application it is trying to secure with the policies. The security policies follow the application life cycle; that is, they come alive when the application is deployed and are destroyed when the application is decommissioned.

    When not to use this: If the environment is fairly static, without mobility, and infrastructure functions are properly demarcated, you do not need to use application-based policies.

    An application-based policy approach will greatly aid in moving toward a self-service IT model. The security team needs to be aware only of how to secure an application, without knowing the underlying topology. Concise and reusable security rules require application awareness; thus, a proper security posture can be developed via application-based policies.

NSX Security-Groups

Security-Groups are a container construct that allows vCenter objects to be grouped into a common entity.
When defining a Security-Group, multiple inclusions and exclusions can be used, as shown in the diagram below:

NSX Security Groups
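
To make the inclusion/exclusion idea concrete, here is a small conceptual Python model of a security group with a dynamic tag-based criterion plus static inclusions and exclusions. It is only a sketch of the concept and does not use the NSX API or its object names; the "Web_Servers" tag echoes the earlier example, while the VM names are made up.

```python
# Conceptual model only (not the NSX API): a security group whose membership
# is the union of a dynamic tag match and static inclusions, minus exclusions.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    tags: set = field(default_factory=set)

@dataclass
class SecurityGroup:
    name: str
    match_tag: str                             # dynamic criterion, e.g. a security tag
    include: set = field(default_factory=set)  # static inclusions (VM names)
    exclude: set = field(default_factory=set)  # static exclusions (VM names)

    def members(self, inventory):
        dynamic = {vm.name for vm in inventory if self.match_tag in vm.tags}
        return (dynamic | self.include) - self.exclude

# VM names below are made up; the "Web_Servers" tag echoes the example above.
inventory = [VM("web-01", {"Web_Servers"}), VM("web-02", {"Web_Servers"}),
             VM("db-01", {"DB_Servers"}), VM("legacy-web")]

web_sg = SecurityGroup("SG-Web", "Web_Servers",
                       include={"legacy-web"}, exclude={"web-02"})
print(sorted(web_sg.members(inventory)))       # ['legacy-web', 'web-01']
```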

Download

Download a full VMware NSX DFW Policy Rules Configuration Technical White Paper.

Rating: 5/5


Jun 13

Getting Started – Microsegmentation using NSX Distributed Firewall

Created by nikhilvmw on Sep 4, 2014 9:33 AM. Last modified by nikhilvmw on Sep 4, 2014 9:35 AM.
VMware NSX for vSphere, release 6.0.x.

Introduction

This document guides you through the step-by-step configuration and validation of NSX-v for microsegmentation services. Microsegmentation makes the data center network more secure by isolating each related group of virtual machines onto a distinct logical network segment, allowing the administrator to firewall traffic traveling from one segment of the data center to another (east-west traffic). This limits attackers’ ability to move laterally in the data center.

VMware NSX uniquely makes microsegmentation scalable, operationally feasible, and cost-effective. This security service provided to applications is now agnostic to virtual network topology. The security configurations we explain in this document can be used to secure traffic among VMs on different L2 broadcast domains or to secure traffic within a L2 broadcast domain.

Microsegmentation is powered by the Distributed Firewall (DFW) component of NSX. DFW operates at the ESXi hypervisor kernel layer and processes packets at near line-rate speed. Each VM has its own firewall rules and context. Workload mobility (vMotion) is fully supported with DFW, and active connections remain intact during the move.

This paper will guide you through two microsegmentation use cases and highlight steps to implement
them in your own environment.

Use Case and Solution Scenarios

This document presents two solution scenarios that use east-west firewalling to handle the use case of
securing network traffic inside the data center. The solution scenarios are:

  • Scenario 1: Microsegmentation for a three-tier application using three different layer-2 logical segments (here implemented using NSX logical switches connected over VXLAN tunnels):


Figure 1 – Scenario 1 Logical View.


In Scenario 1, there are two VMs per tier, and each tier hosts a dedicated function (WEB / APP / DB
services). Traffic protection is provided within the tier and between tiers. Logical switches are used to
group VMs of the same function together.

  • Scenario 2: Microsegmentation for a three-tier application using a single layer-2 logical segment:


Figure 2 – Scenario 2 Logical View.


In Scenario 2, all VMs are located on the same tier. Traffic protection is provided within the tier and per function (WEB/APP/DB services). Security Groups (SG) are used to logically group VMs of the same function together.

For both Scenario 1 and Scenario 2, the following security policies are enforced:

Security Policy



For Scenario 1, a logical switch object is used for the source and destination fields. For Scenario 2, a Service Composer / Security Group object is used for the source and destination fields. By using these vCenter-defined objects, we optimize the number of needed firewall rules irrespective of the number of VMs per tier (or per function).

NOTE: TCP port 1433 simulates the SQL service.
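
Since the policy tables above are images, here is an assumed example of what such a three-tier rule set might look like when expressed as data, along with a toy top-down evaluation. Only TCP port 1433 for the DB tier comes from the note above; the other ports, the object names, and the default block rule are hypothetical.

```python
# Assumed example of a three-tier rule set expressed as data, plus a toy
# top-down evaluation. Only TCP/1433 for the DB tier comes from the note
# above; the other ports, object names, and default block are hypothetical.
RULES = [
    {"name": "any-to-web", "src": "any",      "dst": "Web-Tier", "services": {"TCP/80", "TCP/443"}, "action": "allow"},
    {"name": "web-to-app", "src": "Web-Tier", "dst": "App-Tier", "services": {"TCP/8443"},          "action": "allow"},
    {"name": "app-to-db",  "src": "App-Tier", "dst": "DB-Tier",  "services": {"TCP/1433"},          "action": "allow"},
    {"name": "default",    "src": "any",      "dst": "any",      "services": {"any"},               "action": "block"},
]

def evaluate(src, dst, service):
    """Return the action of the first matching rule (top-down, first match wins)."""
    for rule in RULES:
        if rule["src"] in (src, "any") and rule["dst"] in (dst, "any") \
                and (service in rule["services"] or "any" in rule["services"]):
            return rule["action"]

print(evaluate("App-Tier", "DB-Tier", "TCP/1433"))   # allow (SQL from app to db)
print(evaluate("Web-Tier", "DB-Tier", "TCP/1433"))   # block (web may not reach db)
```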

Physical Topology


Figure 3 – Physical Topology.


Two ESXi hosts in the same cluster are used. Each host has the following connectivity to the physical
network:

  • one VLAN for management, vMotion, and storage. Communication between the ESXi host and the NSX Controllers also travels over this VLAN.
  • one VLAN for data traffic: VXLAN-tunneled, VM-to-VM data traffic uses this VLAN.

Locations:

  • Web-01, app-01 and db-01 VMs are hosted on the first ESXi host.
  • Web-02, app-02 and db-02 VMs are hosted on the second ESXi host.

The purpose of this implementation is to demonstrate complete decoupling of the physical infrastructure from the logical functions such as logical network segments, logical distributed routing and DFW.
In other words, microsegmentation is a logical service offered to an application infrastructure irrespective of the physical components. There is no dependency on where each VM is physically located.

Download

Download a full Getting Started: Microsegmentation using NSX Distributed Firewall Guide.

Rating: 5/5