Mar 12

VMware vSphere Virtual Machine Encryption Performance

Executive Summary

VMware vSphere® virtual machine encryption (VM encryption) is a feature introduced in vSphere 6.5 to enable the encryption of virtual machines. VM encryption secures VMDK data by encrypting I/Os from a virtual machine (which has the VM encryption feature enabled) before they are stored in the VMDK. In this paper, we quantify the impact of using VM encryption on a VM’s I/O performance as well as on some VM provisioning operations like VM clone, power-on, and snapshot creation. We show that while VM encryption can lead to bottlenecks in I/O throughput and latency for ultra-high-performance devices (like a high-end NVMe drive) that can support hundreds of thousands of IOPS, for most regular types of storage, like enterprise-class SSDs or VMware vSAN™, the impact on I/O performance is minimal.

Introduction

VM encryption supports the encryption of virtual machine files, virtual disk files, and core dump files. Some of the files associated with a virtual machine, such as log files, VM configuration files, and virtual disk descriptor files, are not encrypted, because they mostly contain non-sensitive data and because operations like disk management should be supported whether or not the underlying disk files are secured. VM encryption uses vSphere APIs for I/O filtering (VAIO), henceforth referred to as IOFilter.

IOFilter is an ESXi framework that allows the interception of VM I/Os in the virtual SCSI emulation (VSCSI) layer. At a high level, the VSCSI layer can be thought of as the layer in ESXi just below the VM and above the VMFS file system. The IOFilter framework enables developers, both at VMware and at third-party vendors, to write filters that implement additional services on VM I/Os, such as encryption, caching, and replication. The framework is implemented entirely in user space, which cleanly isolates VM I/Os from the core architecture of ESXi and prevents filter failures from affecting the core functionality of the hypervisor; in case of a failure, only the VM in question is affected. Multiple filters can be enabled for a particular VM or VMDK, and these filters are chained so that I/Os are processed by each filter serially, one after the other, and then finally either passed down to VMFS or completed within one of the filters. This is illustrated in Figure 1.
Figure 1. IOFilter design
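
The chaining behavior can be pictured with a small conceptual sketch in Python. This is an illustration of the serial-processing idea only, not the actual VAIO C API; the filter names and the vmfs_submit helper are made up:

    class IOFilter:
        """One element of a per-VMDK filter chain."""
        def __init__(self, name, next_filter=None):
            self.name = name
            self.next_filter = next_filter

        def process(self, io):
            return io  # a real filter would encrypt, cache, or replicate here

        def handle(self, io):
            io = self.process(io)                   # this filter handles the I/O first
            if self.next_filter is not None:
                return self.next_filter.handle(io)  # pass it down the chain serially
            # a filter could also complete the I/O itself instead of passing it down
            return vmfs_submit(io)                  # the last hop reaches the VMFS layer

    def vmfs_submit(io):
        return f"VMFS wrote {len(io)} bytes"

    # VM I/O -> encryption filter -> caching filter -> VMFS
    chain = IOFilter("encryption", IOFilter("caching"))
    print(chain.handle(b"guest write payload"))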

VM Encryption Overview

The primary purpose of VM encryption is to secure the data in VMDKs, such that any unauthorized entity that accesses the VMDK data gets only meaningless data. The VM that legitimately owns the VMDK has the necessary key to decrypt the data whenever it is read, before it is fed to the guest operating system. This is done using industry-standard encryption algorithms to secure this traffic with minimal overhead.
While VM encryption does not impose any new hardware requirements, using a processor that supports the AES-NI instruction set speeds up the encryption/decryption operations. To quantify performance expectations on a traditional server without an AES-NI-enabled processor, the results in this paper were gathered on slightly older servers that do not support the AES-NI instruction set.

Design

VM Encryption Components
Figure 2 shows the various components involved as part of the VM encryption mechanism. It consists of an external key management server (KMS), the vCenter Server system, and an ESXi host or hosts. vCenter Server requests keys from an external KMS, which generates and stores the keys and passes them down to vCenter Server for distribution. An important aspect to note is that there is no “per-block hashing” for the virtual disk.

This means VM encryption protects data against snooping but not against data corruption, since there is no hash for detecting corruption and recovering from it. For added security, the encryption takes into account not only the encryption key but also the block’s address, so two blocks of a VMDK with the same content encrypt to different data.
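
The effect of mixing the block address into the encryption can be illustrated with AES-XTS, a common construction for disk encryption in which the logical block address acts as a tweak. The sketch below uses the Python cryptography package purely as an illustration; it is not VMware’s implementation, and the key and block sizes are arbitrary:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)  # AES-XTS uses a double-length (here 512-bit) key

    def encrypt_block(key, block_address, plaintext):
        tweak = block_address.to_bytes(16, "little")   # block address as the XTS tweak
        encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    block = b"\x00" * 512                  # two identical 512-byte blocks of plaintext
    ct0 = encrypt_block(key, 0, block)
    ct1 = encrypt_block(key, 1, block)
    assert ct0 != ct1                      # same content, different addresses -> different ciphertext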

Key Management

To visualize the mechanism of encryption (and decryption), we need to look at how the various elements in the security policy are laid out topologically. The KMS is the central server in this security-enabled landscape. Figure 3 shows a simplified topology.
Figure 3. Encryption-enabled vCenter Server (VC) topology

The KMS is a secure, centralized repository of cryptographic keys. More than one KMS can be configured with a vCenter Server; however, only KMSs that replicate keys among themselves (usually from the same vendor) should be added to the same KMS cluster, and otherwise each KMS should be added under a different KMS cluster. One of the KMS clusters must be designated as the default in vCenter Server. Only Key Management Interoperability Protocol (KMIP) v1.1 compliant KMSs are supported, and vCenter Server is the KMIP client, which enables vCenter Server to talk to any KMIP-compliant KMS vendor. Before transacting with the KMS, vCenter Server must establish a trust connection with it, which needs to be done manually.
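
To make the key-request flow concrete, the sketch below uses the open-source PyKMIP client to perform the same kind of KMIP exchange that vCenter Server performs natively: create an AES key on the KMS and retrieve it. The hostname, port, and certificate paths are placeholders, and the TLS trust setup must already be in place:

    from kmip.pie.client import ProxyKmipClient
    from kmip.core import enums

    with ProxyKmipClient(hostname="kms.example.com", port=5696,
                         cert="/etc/ssl/kmip-client.pem",
                         key="/etc/ssl/kmip-client.key",
                         ca="/etc/ssl/kms-ca.pem") as client:
        key_id = client.create(enums.CryptographicAlgorithm.AES, 256)  # KMS generates and stores the key
        aes_key = client.get(key_id)                                   # key is returned for distribution
        print("key UID:", key_id)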

Download

Download a full VMware vSphere Virtual Machine Encryption Performance vSphere 6.5 Guide.

Rating: 5/5


Mar 11

VMware NSX Micro-segmentation Day 1 Book Available


VMware NSX Micro-segmentation Day 1 is a concise book that provides the necessary information to guide organizations interested in bolstering their security posture through the implementation of micro-segmentation. VMware NSX Micro-segmentation Day 1 highlights the importance of micro-segmentation in enabling better data center cyber hygiene. It also provides the knowledge and guidance needed to effectively design and implement a data center security strategy around micro-segmentation.

VMware NSX Micro-segmentation Day 1 covers the following topics:

  • Micro-segmentation Definition
  • Micro-segmentation and Cybersecurity standards
  • NSX components enabling micro-segmentation
  • Design considerations for micro-segmentation
  • Creating a grouping framework for micro-segmentation
  • Policy creation tools for micro-segmentation

Download

So be sure to download a copy today and learn more about micro-segmentation and how to make it a foundational part of your security strategy. If you are attending RSA 2017, promotional copies will be handed out at the VMware booth, so be sure to stop by!

Download a full VMware NSX Micro-segmentation Day 1 Book.

Rating: 5/5


Mar 11

VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices

Executive Summary

With the rise in popularity of hybrid cloud computing, where sensitive VM data leaves the traditional IT environment and traverses public networks, IT administrators and architects need a simple and secure way to protect critical VM data that traverses across clouds and over long distances.

The Encrypted vMotion feature available in VMware vSphere® 6.5 addresses this challenge by introducing a software approach that provides end-to-end encryption for vMotion network traffic. The feature encrypts all the vMotion data inside the vmkernel using the widely adopted AES-GCM encryption standard, thereby providing data confidentiality, integrity, and authenticity even if vMotion traffic traverses untrusted network links.

Experiments conducted in the VMware performance labs using industry-standard workloads show the following:

  • vSphere 6.5 Encrypted vMotion performs nearly the same as regular, unencrypted vMotion.
  • The CPU cost of encrypting vMotion traffic is very moderate, thanks to the performance optimizations added to the vSphere 6.5 vMotion code path.
  • vSphere 6.5 Encrypted vMotion provides the proven reliability and performance guarantees of regular, unencrypted vMotion, even across long distances.

Introduction

VMware vSphere® vMotion® [1] provides the ability to migrate a running virtual machine from one vSphere host to another, with no perceivable impact to the virtual machine’s performance. vMotion brings enormous benefits to administrators—it reduces server downtime and facilitates automatic load-balancing.

During migration, the entire memory and disk state associated with a VM, along with its metadata, are transferred over the vMotion network. It is possible during VM migration for an attacker with sufficient network privileges to compromise a VM by modifying its memory contents in transit to subvert the VM’s applications or its guest operating system. Due to this possible security risk, VMware highly recommended administrators use an isolated or secured network for vMotion traffic, separate from other datacenter networks such as the management network or provisioning network. This protected the VM’s sensitive data as it traversed over a secure network.

Even though this recommended approach adds slightly higher network and administrative complexity, it works well in a traditional IT environment where the customer owns the complete network infrastructure and can secure it. In a hybrid cloud, however, workloads move dynamically between clouds and datacenters over secured and unsecured network links. Therefore, it is essential to secure sensitive vMotion traffic at the network endpoints. This protects critical VM data even as the vMotion traffic leaves the traditional IT environment and traverses over the public networks.

vSphere 6.5 introduces Encrypted vMotion, which provides end-to-end encryption of vMotion traffic and protects VM data from eavesdropping occurrences on untrusted network links. Encrypted vMotion provides complete confidentiality, integrity, and authenticity of the data transferred over a vMotion network without any requirement for dedicated networks or additional hardware.

The sections that follow describe:

  • vSphere 6.5 Encrypted vMotion technology and architecture
  • How to configure Encrypted vMotion from the vSphere Client (a scripted equivalent is sketched after this list)
  • Performance implications of encrypting vMotion traffic using real-life workload scenarios
  • Best practices for deployment
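
As a companion to the vSphere Client workflow mentioned in the list above, the following pyVmomi sketch sets the per-VM encrypted vMotion mode programmatically. It assumes the vSphere 6.5 API's migrateEncryption setting (valid values: disabled, opportunistic, required); the vCenter address, credentials, and VM name are placeholders:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()        # lab only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local", pwd="***", sslContext=ctx)
    try:
        view = si.content.viewManager.CreateContainerView(
            si.content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "db-vm")

        spec = vim.vm.ConfigSpec()
        spec.migrateEncryption = "required"       # always encrypt vMotion traffic for this VM
        vm.ReconfigVM_Task(spec)                  # in real automation, wait for the task to finish
    finally:
        Disconnect(si)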

Encrypted vMotion Architecture

vMotion uses TCP as the transport protocol for migrating VM data. To secure VM migration, vSphere 6.5 encrypts all vMotion traffic, including the TCP payload and vMotion metadata, using the widely used AES-GCM encryption algorithm provided by the FIPS-certified vmkernel vmkcrypto module.
Figure: Encrypted vMotion workflow
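
AES-GCM is an authenticated-encryption mode, which is what gives Encrypted vMotion confidentiality, integrity, and authenticity in a single pass. The short sketch below uses the Python cryptography package only to illustrate that property; it is not the vmkcrypto module, and the key, nonce size, and page contents are arbitrary:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.exceptions import InvalidTag

    key = AESGCM.generate_key(bit_length=256)     # stand-in for a per-migration key
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                        # must never repeat for the same key
    page = b"\x00" * 4096                         # stand-in for a guest memory page
    ciphertext = aesgcm.encrypt(nonce, page, None)

    # Receiver side: decryption also verifies the authentication tag
    assert aesgcm.decrypt(nonce, ciphertext, None) == page

    # Any in-flight modification is rejected rather than silently accepted
    tampered = bytearray(ciphertext)
    tampered[0] ^= 0x01
    try:
        aesgcm.decrypt(nonce, bytes(tampered), None)
    except InvalidTag:
        print("integrity check failed: data was modified in transit")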

Encryption Protocol

Encrypted vMotion does not rely on the Secure Sockets Layer (SSL) or Internet Protocol Security (IPsec) technologies for securing vMotion traffic. Instead, it implements a custom encrypted protocol above the TCP layer. This is done primarily for performance, but also for reasons explained below.
SSL is compute intensive and implemented entirely in user space, while vMotion, which is part of core ESXi, executes in kernel space. This means that if vMotion were to rely on SSL, each encryption/decryption call would need to cross between kernel space and user space, resulting in excessive performance overhead. Using the encryption algorithms provided by the vmkernel vmkcrypto module enables vMotion to avoid such a performance penalty.

Although IPsec can be used to secure vMotion traffic, its usability is limited in the vSphere environment because ESXi hosts support IPsec only for IPv6 traffic, not for IPv4 traffic. Besides, implementing a custom protocol above the TCP layer gives vMotion the ability to create the appropriate number of vMotion worker threads and coordinate efficiently among them to spread the encryption/decryption CPU load across multiple cores.

Download

Download a full VMware vSphere Encrypted vMotion Architecture, Performance and Best Practices Study.

Rating: 5/5


Mar 11

DRS Performance in VMware vSphere 6.5

Introduction

VMware vSphere® Distributed Resource Scheduler™ (DRS) has been evolving for more than a decade, with innovations in every new version. In vSphere 6.5, DRS comes with many new features and performance improvements to ensure more efficient load balancing and VM placement, faster response times, and simplified cluster management.
In this paper, we cover some of the key features and performance improvements to highlight the more efficient, faster, and lighter DRS in vSphere 6.5.

New Features

Predictive DRS
Historically, vSphere DRS has been reactive: it reacts to any changes in VM workloads and migrates VMs to distribute load across different hosts. In vSphere 6.5, with VMware vCenter Server® working together with VMware vRealize® Operations™ (vROps), DRS can act upon predicted future changes in workloads. This helps DRS migrate VMs proactively and make room in the cluster to accommodate future workload demand.
For example, if your VMs’ workload is going to spike at 9 a.m. every day, Predictive DRS can detect this pattern beforehand based on historical data from vROps, and can prepare the cluster resources by using either of the following techniques:

  • Migrating the VMs to different hosts to accommodate the future workload and avoid host over-commitment
  • Bringing a host out of standby mode using VMware vSphere® Distributed Power Management™ (DPM) to accommodate the future demand

How It Works

To enable predictive DRS, you need to link vCenter Server to a vROps instance (that supports predictive DRS), which monitors the resource usage pattern of VMs and generates predictions. Once vROps starts monitoring VM workloads, it generates predictions after a specified learning period. The generated predictions are then provided to vCenter Server for DRS to consume.

Once the VMs’ workload predictions are available, DRS evaluates the demand of a VM based on its current resource usage and predicted future resource usage.

    Demand of a VM = Max (current usage, predicted future usage)

Considering the maximum of current and future resource usage ensures that DRS does not clip any VM’s current demand in favor of its future demand. For VMs that do not have predictions, DRS computes resource demand based only on the current resource usage.
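
A small illustrative sketch of this demand calculation (not DRS source code; the units and numbers are arbitrary):

    def vm_demand(current_usage, predicted_usage=None):
        """Effective demand DRS uses for a VM, e.g. in MHz or MB."""
        if predicted_usage is None:           # no vROps prediction available for this VM
            return current_usage
        return max(current_usage, predicted_usage)

    # A VM currently using 1000 MHz that is predicted to spike to 2500 MHz
    print(vm_demand(1000, 2500))   # -> 2500 (future demand dominates)
    print(vm_demand(2500, 1000))   # -> 2500 (current demand is never clipped)
    print(vm_demand(1000))         # -> 1000 (no prediction: current usage only)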

Look Ahead Interval

The predictions that DRS gets from vROps are always for a certain period of time, starting from the current time. This period is known as the “look-ahead interval” for Predictive DRS. By default it is 60 minutes starting from the current time, which means that by default the predictions are always for the next hour. So if any sudden spike is going to happen within the next hour, Predictive DRS will detect it and prepare the cluster to handle it.

Network-Aware DRS

Traditionally, DRS has considered only the compute resource (CPU and memory) utilization of hosts and VMs for balancing load across hosts and placing VMs during power-on. This generally works well because in many cases, CPU and memory are the most important resources needed for good application performance. However, since network availability is not considered in this approach, it can result in placing or migrating a VM to a host that is already network saturated, which might affect application performance if the application is network sensitive.
In vSphere 6.5, DRS is network-aware: it now considers the network utilization of hosts and the network usage requirements of VMs during initial placement and load balancing. This makes DRS load balancing and initial placement of VMs more effective.

How It Works

During initial placement and load balancing, DRS first comes up with the list of best possible hosts to run a VM based on compute resources, and then uses some heuristics to decide the final host based on VM and host network utilization. This makes sure the VM gets the network resources it needs along with the compute resources.

The goal of network-aware DRS in vSphere 6.5 is only to make sure the host has sufficient network resources available along with compute resources required by the VM. So, unlike regular DRS, which balances the CPU and memory load, network-aware DRS does not balance the network load in the cluster, which means it will not trigger a vMotion when there is network load imbalance.
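
The two-phase selection described above can be pictured with the following sketch. It is an illustration only, not the actual DRS heuristics; the saturation threshold and scores are made-up values:

    NET_SATURATION_THRESHOLD = 0.8     # assumed fraction of link capacity, for illustration

    def pick_host(candidate_hosts, vm_net_demand):
        # Phase 1: hosts ranked best-first on compute (CPU/memory) fit
        ranked = sorted(candidate_hosts, key=lambda h: h["compute_score"], reverse=True)
        # Phase 2: among those, prefer a host that will not become network saturated
        for host in ranked:
            if host["net_utilization"] + vm_net_demand <= NET_SATURATION_THRESHOLD:
                return host
        return ranked[0]               # fall back to the best compute fit if all are saturated

    hosts = [
        {"name": "esx-01", "compute_score": 0.9, "net_utilization": 0.85},
        {"name": "esx-02", "compute_score": 0.8, "net_utilization": 0.30},
    ]
    print(pick_host(hosts, vm_net_demand=0.10)["name"])   # -> esx-02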

Download

Download a full DRS Performance in VMware vSphere 6.5 Study Guide.

Rating: 5/5


Nov 22

The VMware Cross-Cloud Architecture

See how VMware’s Cross-Cloud Architecture helps you avoid cloud silos, giving you both freedom and control in IT infrastructure.

Rating: 5/5


Nov 01

VMware® Virtual SAN™ 6.2 Stretched Cluster & 2 Node Guide

Jase McCarty
Storage and Availability Business Unit
VMware
v 6.2.0 / March 2016 / version 0.30

Introduction

VMware Virtual SAN 6.1, shipping with vSphere 6.0 Update 1, introduced a new feature called VMware Virtual SAN Stretched Cluster. Virtual SAN Stretched Cluster is a specific configuration implemented in environments where disaster/downtime avoidance is a key requirement. This guide was developed to provide additional insight and information for installation, configuration and operation of a Virtual SAN Stretched Cluster infrastructure in conjunction with VMware vSphere. This guide will explain how vSphere handles specific failure scenarios and discuss various design considerations and operational procedures.

Virtual SAN Stretched Clusters with Witness Host refers to a deployment where a user sets up a Virtual SAN cluster with 2 active/active sites with an identical number of ESXi hosts distributed evenly between the two sites. The sites are connected via a high bandwidth/low latency link.

The third site hosting the Virtual SAN Witness Host is connected to both of the active/active data-sites. This connectivity can be via low bandwidth/high latency links.


Stretched Cluster Configuration

Each site is configured as a Virtual SAN Fault Domain. The nomenclature used to describe a Virtual SAN Stretched Cluster configuration is X+Y+Z, where X is the number of ESXi hosts at data site A, Y is the number of ESXi hosts at data site B, and Z is the number of witness hosts at site C. Data sites are where virtual machines are deployed. The minimum supported configuration is 1+1+1 (3 nodes). The maximum configuration is 15+15+1 (31 nodes).
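
A small sketch of the X+Y+Z rules above (illustrative only, not a VMware tool):

    def validate_stretched_config(x, y, z):
        """x, y = ESXi hosts at data sites A and B; z = witness hosts at site C."""
        if z != 1:
            return False, "exactly one witness host is supported"
        if x < 1 or y < 1:
            return False, "minimum supported configuration is 1+1+1 (3 nodes)"
        if x > 15 or y > 15:
            return False, "maximum supported configuration is 15+15+1 (31 nodes)"
        return True, f"{x}+{y}+{z} ({x + y + z} nodes)"

    print(validate_stretched_config(15, 15, 1))   # (True, '15+15+1 (31 nodes)')
    print(validate_stretched_config(16, 15, 1))   # (False, 'maximum supported configuration is 15+15+1 (31 nodes)')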

In Virtual SAN Stretched Clusters, there is only one witness host in any configuration. A virtual machine deployed on a Virtual SAN Stretched Cluster will have one copy of its data on site A, a second copy of its data on site B, and any witness components placed on the witness host in site C. This configuration is achieved through fault domains together with host/VM groups and affinity rules. In the event of a complete site failure, there will be a full copy of the virtual machine data as well as greater than 50% of the components available, which allows the virtual machine to remain available on the Virtual SAN datastore. If the virtual machine needs to be restarted on the other site, vSphere HA will handle this task.

Support Statements

vSphere versions

Virtual SAN Stretched Cluster configurations require vSphere 6.0 Update 1 (U1) or greater. This implies both vCenter Server 6.0 U1 and ESXi 6.0 U1. This version of vSphere includes Virtual SAN version 6.1. This is the minimum version required for Virtual SAN Stretched Cluster support.

vSphere & Virtual SAN

Virtual SAN version 6.1 introduced features including both All-Flash and Stretched Cluster functionality. There are no limitations on the edition of vSphere used for Virtual SAN. However, for Virtual SAN Stretched Cluster functionality, vSphere DRS is very desirable. DRS will provide initial placement assistance and will also automatically migrate virtual machines to their correct site in accordance with Host/VM affinity rules. It can also help with relocating virtual machines to their correct site when a site recovers after a failure. Otherwise, the administrator will have to carry out these tasks manually. Note that DRS is only available in the Enterprise edition and higher of vSphere.

Hybrid and All-Flash support

Virtual SAN Stretched Cluster is supported on both hybrid configurations (hosts with local storage comprised of both magnetic disks for capacity and flash devices for cache) and all-flash configurations (hosts with local storage made up of flash devices for capacity and flash devices for cache).

On-disk formats

VMware supports Virtual SAN Stretched Cluster with the v2 on-disk format only. The v1 on-disk format is based on VMFS and is the original on-disk format used for Virtual SAN. The v2 on-disk format is the version which comes by default with Virtual SAN version 6.x. Customers that upgraded from the original Virtual SAN 5.5 to Virtual SAN 6.0 may not have upgraded the on-disk format from v1 to v2, and are thus still using v1. VMware recommends upgrading the on-disk format to v2 for improved performance and scalability, as well as stretched cluster support. In Virtual SAN 6.2 clusters, the v3 on-disk format allows for additional features, discussed later, specific to 6.2.

Features supported on VSAN but not VSAN Stretched Clusters

The following is a list of products and features supported on Virtual SAN but not on a stretched cluster implementation of Virtual SAN.

  • SMP-FT, the new Fault Tolerant VM mechanism introduced in vSphere 6.0, is supported on standard VSAN 6.1 deployments, but it is not supported on stretched cluster VSAN deployments at this time. The exception to this rule is when using 2 Node configurations in the same physical location.
  • The maximum value for NumberOfFailuresToTolerate in a Virtual SAN Stretched Cluster configuration is 1. This is the limit due to the maximum number of Fault Domains being 3.
  • In a Virtual SAN Stretched Cluster, there are only 3 Fault Domains. These are typically referred to as the Preferred, Secondary, and Witness Fault Domains. Standard Virtual SAN configurations can be comprised of up to 32 Fault Domains.
  • The Erasure Coding feature introduced in Virtual SAN 6.2 requires 4 Fault Domains for RAID5 type protection and 6 Fault Domains for RAID6 type protection. Because Stretched Cluster configurations have only 3 Fault Domains, Erasure Coding is not supported on Stretched Clusters at this time. (These constraints are consolidated in the sketch after this list.)
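
The sketch below consolidates the fault-domain constraints above into a single policy check (illustrative only):

    STRETCHED_FAULT_DOMAINS = 3   # Preferred, Secondary, and Witness

    def policy_supported_on_stretched(failures_to_tolerate, erasure_coding=False):
        """Return True if a VM storage policy can be satisfied on a stretched cluster."""
        if erasure_coding:
            # RAID5 needs 4 fault domains and RAID6 needs 6; only 3 exist here
            return False
        # NumberOfFailuresToTolerate is capped at 1 by the 3 fault domains
        return failures_to_tolerate <= 1

    print(policy_supported_on_stretched(1))                        # True
    print(policy_supported_on_stretched(2))                        # False
    print(policy_supported_on_stretched(1, erasure_coding=True))   # False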

Download

Download a full VMware Virtual SAN 6.2 Stretched Cluster Guide.

Rating: 5/5


Jul 29

NSX-V Multi-site Options and Cross-VC NSX Design Guide

Created by Humair Ahmed on Jul 22, 2016 1:36 PM. Last modified by Humair Ahmed on Jul 25, 2016 2:19 PM.
This design guide is in initial draft status and feedback is welcome for next updated version release.
Please send feedback to hahmed@vmware.com.

The goal of this design guide is to outline several NSX solutions available for multi-site data center connectivity before digging deeper into the details of the Cross-VC NSX multi-site solution. Learn how Cross-VC NSX enables logical networking and security across multiple vCenter domains/sites and how it provides enhanced solutions for specific use cases. No longer is logical networking and security constrained to a single vCenter domain. Cross-VC NSX use cases, architecture, functionality, deployment models, design, and failure/recovery scenarios are discussed in detail.

This document is targeted toward virtualization and network architects interested in deploying the VMware NSX® network virtualization solution in a vSphere environment.

The design guide addresses the following topics:

  • Why Multi-site?
  • Traditional Multi-site Challenges
  • Why VMware NSX for Multi-site Data Center Solutions
  • NSX Multi-site Solution

Cross-VC NSX:

  • Use Cases
  • Architecture and Functionality
  • Deployment Models
  • Design Guidance
  • Failure/Recovery scenarios

Cross-VC NSX Overview

VMware NSX provides network virtualization technology that decouples networking services from the underlying physical infrastructure. By replicating traditional networking hardware constructs and moving the network intelligence to software, logical networks can be created efficiently over any basic IP network transport. This software-based approach to networking provides the same benefits to the network that server virtualization provided for compute.
Prior to NSX 6.2, although NSX provided the flexibility, agility, efficiency, and other benefits of network virtualization, logical networking and security were constrained to the boundaries of a single vCenter domain.

Figure 16 below shows NSX logical networking and security deployed within a single site and across a single vCenter domain.
Figure 16: VMware NSX deployment in a single site with a single vCenter Server

Although it was possible to use NSX with one vCenter domain and stretch logical networking and security across sites, the benefits of network virtualization with NSX were still limited to one vCenter domain. Figure 17 below shows multiple vCenter domains, which happen to also be at different sites, all requiring separate NSX Controllers and having isolated logical networking and security.

Thanks to all the contributors and reviewers of this document.

This will also soon be posted on our NSX Technical Resources website (link below):
http://www.vmware.com/products/nsx/resources.html

Feedback and Comments to the Authors and the NSX Solution Team are highly appreciated.
– The VMware NSX Solution Team

Download

Download Multi-site Options and Cross-VC NSX Design Guide.pdf (15.5 MB).

Rating: 5/5


Jun 14

VMware vCenter Server 6.0 Performance and Best Practices

Introduction

VMware vCenter Server™ 6.0 substantially improves performance over previous vCenter Server versions. This paper demonstrates the improved performance in vCenter Server 6.0 compared to vCenter Server 5.5, and shows that vCenter Server with the embedded vPostgres database now performs as well as vCenter Server with an external database, even at vCenter Server’s scale limits. This paper also discusses factors that affect vCenter Server performance and provides best practices for vCenter Server performance.

What’s New in vCenter Server 6.0

vCenter Server 6.0 brings extensive improvements in performance and scalability over vCenter Server 5.5:

  • Operational throughput is over 100% higher, and certain operations are over 80% faster.
  • VMware vCenter Server™ Appliance™ now has the same scale limits as vCenter Server on Windows with an external database: 1,000 ESXi hosts, 10,000 powered-on virtual machines, and 15,000 registered virtual machines.
  • VMware vSphere® Web Client performance has improved, with certain pages over 90% faster.

In addition, vCenter Server 6.0 provides new deployment options:

  • Both vCenter Server on Windows and VMware vCenter Server Appliance provide an embedded vPostgres database as an alternative to an external database. (vPostgres replaces the SQL Server Express option that was available in previous vCenter versions.)
  • The embedded vPostgres database supports vCenter’s full scale limits when used with the vCenter Server Appliance.

Performance Comparison with vCenter Server 5.5

In order to demonstrate and quantify performance improvements in vCenter Server 6.0, this section compares 6.0 and 5.5 performance at several inventory and workload sizes. In addition, this section compares vCenter Server 6.0 on Windows to the vCenter Server Appliance at different inventory sizes, to highlight the larger scale limits in the Appliance in vCenter 6.0. Finally, this section illustrates the performance gained by provisioning vCenter with additional resources.

The workload for this comparison uses vSphere Web Services API clients to simulate a self-service cloud environment with a large amount of virtual machine “churn” (that is, frequently creating, deleting, and reconfiguring virtual machines). Each client repeatedly issues a series of inventory management and provisioning operations to vCenter Server. Table 1 lists the operations performed in this workload. The operations listed here were chosen from a sampling of representative customer data. Also, the inventories in this experiment used vCenter features including DRS, High Availability, and vSphere Distributed Switch. (See Appendix A for precise details on inventory configuration.)

Table 1. Operations performed in performance comparison workload
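
For readers who want a feel for what such a churn client looks like, the sketch below uses pyVmomi to repeatedly create, reconfigure, and delete a tiny VM through the vSphere Web Services API. It is not the benchmark harness used for this paper; the vCenter address, credentials, and datastore name are placeholders:

    import ssl
    import time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def wait(task):
        """Block until a vCenter task finishes and return its result."""
        while task.info.state in (vim.TaskInfo.State.queued, vim.TaskInfo.State.running):
            time.sleep(0.5)
        if task.info.state != vim.TaskInfo.State.success:
            raise RuntimeError(task.info.error)
        return task.info.result

    ctx = ssl._create_unverified_context()                  # lab only
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local", pwd="***", sslContext=ctx)
    try:
        dc = si.content.rootFolder.childEntity[0]           # first datacenter
        pool = dc.hostFolder.childEntity[0].resourcePool    # first cluster's root resource pool

        for i in range(10):                                 # one churn iteration per loop
            spec = vim.vm.ConfigSpec(
                name=f"churn-vm-{i}", guestId="otherGuest", numCPUs=1, memoryMB=128,
                files=vim.vm.FileInfo(vmPathName="[datastore1]"))
            vm = wait(dc.vmFolder.CreateVM_Task(config=spec, pool=pool))   # create
            wait(vm.ReconfigVM_Task(vim.vm.ConfigSpec(memoryMB=256)))      # reconfigure
            wait(vm.Destroy_Task())                                        # delete
    finally:
        Disconnect(si)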

Results

Figure 3 shows vCenter Server operation throughput (in operations per minute) for the heaviest workload for each inventory size. Performance has improved considerably at all sizes. For example, for the large inventory setup (Figure 3, right), operational throughput has increased from just over 600 operations per minute in vCenter Server 5.5 to over 1,200 operations per minute in vCenter Server 6.0 for Windows: an improvement of over 100%.
The other inventory sizes show similar gains in operational throughput.


Figure 3. vCenter throughput at several inventory sizes, with heavy workload (higher is better). Throughput has increased at all inventory sizes in vCenter Server 6.0.

Figure 4 shows median latency across all operations in the heaviest workload for each inventory size. Just as with operational throughput in Figure 3, latency has improved at all inventory sizes. For example, for the large inventory setup (Figure 4, right), median operational latency has decreased from 19.4 seconds in vCenter Server 5.5 to 4.0 seconds in vCenter Server Appliance 6.0: a decrease of about 80%. The other inventory sizes also show large decreases in operational latency.


Figure 4. vCenter Server median latency at several inventory sizes, with heavy workload (lower is better). Latency has decreased at all inventory sizes in vCenter 6.0.

Download

Download a full VMware vCenter Server 6.0 Performance and Best Practices Technical White Paper

Rating: 5/5


Jun 13

NSX Distributed Firewalling Policy Rules Configuration Guide

Created by nikhilvmw on Sep 23, 2014 5:16 PM. Last modified by nikhilvmw on Nov 6, 2014 2:19 PM.
VMware NSX for vSphere, release 6.0.x.

This document covers how to create security policy rules in VMware NSX, including the different options for configuring security rules through either the Distributed Firewall or the Service Composer user interface. It also covers the unique options NSX offers to create dynamic policies based on infrastructure context.

Thanks to Francis Guillier, Kausum Kumar and Srini Nimmagadda for helping author this document.
Regards,
NSX Team

Introduction

VMware NSX Distributed Firewall (DFW) provides the capability to enforce firewalling functionality directly at the virtual machine (VM) vNIC layer. It is a core component of the micro-segmentation security model, where east-west traffic can now be inspected at near line rate, preventing lateral-movement attacks.

This technical brief gives details about DFW policy rule configuration with NSX. Both DFW security policy objects and DFW consumption model will be discussed in this document.

We assume the reader already has some knowledge of DFW and Service Composer functions. Please refer to the appropriate collateral if you need more information on these NSX components.

Distributed Firewall Object Grouping Model

NSX provides the capability to micro-segment your SDDC to provide an effective security posture. To implement micro-segmentation in your SDDC, NSX provides various ways of grouping VMs and applying security policies to them. This document specifies in detail the different ways grouping can be done and when you should use one approach over another.
Security policy rules can be written in various ways as shown below:

Network Based Policies:

    This is the traditional approach of grouping based on L2 or L3 elements. Grouping can be based on MAC addresses, IP addresses, or a combination of both. NSX supports this approach of grouping objects. The security team needs to be aware of the networking infrastructure to deploy network-based policies. There is a high probability of security rule sprawl because grouping based on dynamic attributes is not used. This method of grouping works well if you are migrating existing rules from a different vendor’s firewall.


When not to use this: In dynamic environments (e.g., self-service IT or cloud-automated deployments) where VMs and application topologies are added and deleted at a rapid rate, a MAC address-based grouping approach may not be suitable because there will be a delay between provisioning a VM and adding its MAC address to the group. If you have an environment with high mobility, like vMotion and HA, L3/IP-based grouping approaches may not be adequate either.

Infrastructure Based Policies:

    In this approach, grouping is based on SDDC infrastructure like vCenter clusters, logical switches, distributed port groups, etc. An example would be clusters 1 through 4 being earmarked for PCI-type applications; in such a case, grouping can be done based on cluster names and rules can be enforced based on these groups. Another example would be if you know which logical switches in your environment are connected to which applications, e.g., the App Tier logical switch contains all VMs pertaining to application ‘X’. The security team needs to work closely with the vCenter administration team to understand logical and physical boundaries.

    When not to use this: If there are no physical or logical boundaries in your SDDC environment, then this type of approach is not suitable. Also, you need to be careful about where you can deploy your applications. For example, if you would like to deploy a PCI workload to any cluster that has adequate compute resources available, the security posture cannot be tied to a cluster but should move with the application.

Application Based Policies:

    In this approach, grouping is based on the application type (e.g., VMs tagged as “Web_Servers”), application environment (e.g., all resources tagged as “Production_Zone”), and application security posture. The advantage of this approach is that the security posture of the application is not tied to either network constructs or SDDC infrastructure. Security policies can move with the application irrespective of network or infrastructure boundaries, and policies can be templated and reused across instances of the same types of applications and workloads. You can use a variety of mechanisms to group VMs. The security team needs to be aware only of the application it is trying to secure with the policies. The security policies follow the application life cycle: they come alive when the application is deployed and are destroyed when the application is decommissioned.

    When not to use this: If the environment is fairly static, without mobility, and infrastructure functions are properly demarcated, you do not need to use application-based policies.

    An application-based policy approach greatly aids the move toward a self-service IT model. The security team needs to know only how to secure an application, without knowing the underlying topology. Concise and reusable security rules require application awareness; thus a proper security posture can be developed via application-based policies.

NSX Security-Groups

Security-Groups is a container construct that allows vCenter objects to be grouped into a common entity.
When defining a Security-Group, multiple inclusions and exclusions can be used, as shown in the diagram below:

NSX Security Groups
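
Conceptually, the effective membership of a Security-Group is the union of its static inclusions and dynamic matches, minus explicit exclusions. A tiny sketch of that evaluation (plain Python, not the NSX API; the VM names are made up):

    def effective_members(static_includes, dynamic_matches, excludes):
        """Union of statically and dynamically included objects, minus exclusions."""
        return (set(static_includes) | set(dynamic_matches)) - set(excludes)

    web_sg = effective_members(
        static_includes={"web-01"},                      # directly included vCenter objects
        dynamic_matches={"web-01", "web-02", "web-03"},  # e.g. VM name matches "web-*" or a security tag
        excludes={"web-03"},                             # explicitly excluded object
    )
    print(sorted(web_sg))   # ['web-01', 'web-02']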

Download

Download a full VMware NSX DFW Policy Rules Configuration Technical White Paper

Rating: 5/5


Jun 13

Getting Started – Microsegmentation using NSX Distributed Firewall

Created by nikhilvmw on Sep 4, 2014 9:33 AM. Last modified by nikhilvmw on Sep 4, 2014 9:35 AM.
VMware NSX for vSphere, release 6.0.x.

Introduction

This document guides you through the step-by-step configuration and validation of NSX-v for microsegmentation services. Microsegmentation makes the data center network more secure by isolating each related group of virtual machines onto a distinct logical network segment, allowing the administrator to firewall traffic traveling from one segment of the data center to another (east-west traffic). This limits attackers’ ability to move laterally in the data center.

VMware NSX uniquely makes microsegmentation scalable, operationally feasible, and cost-effective. This security service provided to applications is now agnostic to virtual network topology. The security configurations we explain in this document can be used to secure traffic among VMs on different L2 broadcast domains or to secure traffic within a L2 broadcast domain.

Microsegmentation is powered by the Distributed Firewall (DFW) component of NSX. DFW operates at the ESXi hypervisor kernel layer and processes packets at near line-rate speed. Each VM has its own firewall rules and context. Workload mobility (vMotion) is fully supported with DFW, and active connections remain intact during the move.

This paper will guide you through two microsegmentation use cases and highlight the steps to implement them in your own environment.

Use Case and Solution Scenarios

This document presents two solution scenarios that use east-west firewalling to handle the use case of securing network traffic inside the data center. The solution scenarios are:

  • Scenario 1: Microsegmentation for a three-tier application using three different layer-2 logical segments (here implemented using NSX logical switches connected over VXLAN tunnels):


Figure 1 – Scenario 1 Logical View.


In Scenario 1, there are two VMs per tier, and each tier hosts a dedicated function (WEB / APP / DB services). Traffic protection is provided within each tier and between tiers. Logical switches are used to group VMs of the same function together.

  • Scenario 2: Microsegmentation for a three-tier application using a single layer-2 logical segment:


Figure 2 – Scenario 2 Logical View.


In Scenario 2, all VMs are located on the same tier. Traffic protection is provided within the tier and per function (WEB / APP / DB services). Security Groups (SG) are used to logically group VMs of the same function together.

For both Scenario 1 and Scenario 2, the following security policies are enforced:

Security Policy


For Scenario 1, a logical switch object is used for the source and destination fields. For Scenario 2, a Service Composer / Security Group object is used for the source and destination fields. By using these vCenter-defined objects, we optimize the number of firewall rules needed, irrespective of the number of VMs per tier (or per function).

NOTE: TCP port 1433 simulates the SQL service.
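
For orientation, the sketch below models a rule set of this shape as plain data. It is an illustration only, not an NSX export: the object names and the web/app ports are assumptions, and only TCP 1433 for the database service comes from the note above. In Scenario 1 the source/destination objects would be logical switches; in Scenario 2 they would be Service Composer security groups:

    dfw_rules = [
        {"name": "any-to-web",   "source": "any",      "destination": "Web-Tier", "service": "TCP/443",  "action": "allow"},
        {"name": "web-to-app",   "source": "Web-Tier", "destination": "App-Tier", "service": "TCP/8443", "action": "allow"},
        {"name": "app-to-db",    "source": "App-Tier", "destination": "DB-Tier",  "service": "TCP/1433", "action": "allow"},
        {"name": "default-deny", "source": "any",      "destination": "any",      "service": "any",      "action": "block"},
    ]

    for rule in dfw_rules:
        print(f"{rule['name']:13s} {rule['source']:9s} -> {rule['destination']:9s} "
              f"{rule['service']:9s} {rule['action']}")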

Physical Topology


Figure 3 – Physical Topology.


Two ESXi hosts in the same cluster are used. Each host has the following connectivity to the physical network:

  • one VLAN for management, vMotion, and storage. Communication between the ESXi host and the NSX Controllers also travels over this VLAN.
  • one VLAN for data traffic: VXLAN-tunneled, VM-to-VM data traffic uses this VLAN.

Locations:

  • The web-01, app-01, and db-01 VMs are hosted on the first ESXi host.
  • The web-02, app-02, and db-02 VMs are hosted on the second ESXi host.

The purpose of this implementation is to demonstrate complete decoupling of the physical infrastructure from the logical functions, such as logical network segments, logical distributed routing, and DFW. In other words, microsegmentation is a logical service offered to an application infrastructure irrespective of the physical components. There is no dependency on where each VM is physically located.

Download

Download a full Getting Started: Microsegmentation using NSX Distributed Firewall Guide

Rating: 5/5