Jun 14

Virtual Volumes part 2: Architecture

This video covers the architecture of Virtual Volumes in vSphere 6.

Rating: 5/5


Jun 14

Virtual Volumes part 1: Concepts

This video covers the concepts of Virtual Volumes in vSphere 6.

Jun 14

Configuring vSphere Flash Read Cache

Senior Software Engineer Peter Shepherd shows you how Flash Read Cache™ lets you accelerate virtual machine performance by using host-resident flash devices as a cache.

Jun 13

Deploying a Centralized VMware vCenter™ Single Sign-On™ Server with a Network Load Balancer

Overview

With the release of VMware vSphere® 5.5 and VMware® vCenter Server™ 5.5, multiple components deliver the vCenter Server management solution. One component, VMware vCenter™ Single Sign-On™ server, offers an optional deployment configuration that enables the centralization of vCenter Single Sign-On services for multiple local solutions such as vCenter Server. If not architected correctly, however, centralization can increase risk, so careful design of the vCenter Single Sign-On server deployment is highly recommended.

This paper highlights the high-availability options for a centralized vCenter Single Sign-On environment and provides a reference guide for deploying one of the more common centralized vCenter Single Sign-On configurations with an external network load balancer (NLB).

When to Centralize vCenter Single Sign-On Server

VMware highly recommends deploying all vCenter Server components into a single virtual machine—excluding the vCenter Server database. However, large enterprise customers running many vCenter Server instances within a single physical location can simplify vCenter Single Sign-On architecture and management—reducing footprint and required resources—by dedicating a vCenter Single Sign-On environment to all resources in each physical location.

For vSphere 5.5, as a general guideline, VMware recommends centralization of vCenter Single Sign-On server when eight or more vCenter Server instances are present in a given location.

A Centralized vCenter Single Sign-On Server Environment

Figure 1 – A Centralized vCenter Single Sign-On Server Environment.

Centralized Single Sign-On High-Availability Options

The absence of vCenter Single Sign-On server greatly impacts the management, accessibility, and operations within a vSphere environment. The type of availability required is based on the user’s recovery time objective (RTO), and VMware solutions can offer various levels of protection.

VMware vSphere Data Protection

VMware vSphere Data Protection™ provides a disk-level backup-and-restore capability utilizing storage-based snapshots. With the release of vSphere Data Protection 5.5, VMware now provides the option of host-level restore. Users can back up vCenter Single Sign-On server virtual machines using vSphere Data Protection and can restore later as necessary to a specified vSphere host.

VMware vSphere High Availability

When deploying a centralized vCenter Single Sign-On server to a vSphere virtual machine environment, users can also deploy VMware vSphere High Availability (vSphere HA) to enable recovery of the vCenter Single Sign-On server virtual machines. vSphere HA monitors virtual machines via heartbeats from the VMware Tools™ package, and it can initiate a reboot of the virtual machine when the heartbeat is no longer received or when the vSphere host has failed.
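
For readers who automate their environments, here is a minimal pyVmomi sketch of enabling vSphere HA with heartbeat-based virtual machine monitoring on an existing cluster. It is an illustration only: the pyvmomi package, the connection details, and the cluster name "Mgmt-Cluster" are assumptions, not part of the paper.

    # Illustrative sketch; connection details and the cluster name are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Locate the cluster object through a container view of the inventory.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Mgmt-Cluster")
    view.DestroyView()

    # Enable vSphere HA with host monitoring and VM (heartbeat-based) monitoring.
    das = vim.cluster.DasConfigInfo(enabled=True,
                                    hostMonitoring="enabled",
                                    vmMonitoring="vmMonitoringOnly")
    WaitForTask(cluster.ReconfigureComputeResource_Task(
        vim.cluster.ConfigSpecEx(dasConfig=das), modify=True))
    Disconnect(si)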

VMware vCenter Server Heartbeat

VMware vCenter Server Heartbeat™ provides a richer availability model for the monitoring and redundancy of vCenter Server and its components. It places a centralized vCenter Single Sign-On server into an active–passive architecture, monitors the application, and provides an up-to-date passive node for recovery during a vSphere host, virtual machine, or application failure.

Network Load Balancer

A VMware or third-party NLB can be configured to allow SSL pass-through communications to a number of local vCenter Single Sign-On server instances and provide a distributed and redundant vCenter Single Sign-On solution. Although VMware provides NLB capability in some of its optional products, such as VMware vCloud® Networking and Security™, there also are third-party solutions available in the marketplace. VMware does not provide support for third-party NLB solutions.

Deploying vCenter Single Sign-On Server with a Network Load Balancer

Preinstallation Checklist

The guidance provided within this document will reference the following details:

Centralized vCenter Single Sign-On Requirements

Table 1 – Centralized vCenter Single Sign-On Requirements

vCenter Single Sign-On Server with a Network Load Balancer

Figure 2 – Example of a vCenter Single Sign-On Server with a Network Load Balancer

Download

Download a full Deploying a Centralized VMware vCenter™ Single Sign-On™ Server with a Network Load Balancer – Technical Reference

Rating: 5/5


Jun 11

Oracle Databases on VMware Best Practices Guide

Introduction

This Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying Oracle databases on VMware vSphere®. The recommendations in this guide are not specific to any particular set of hardware, or size and scope of any particular Oracle database implementation. The examples and considerations provide guidance, but do not represent strict design requirements.

The successful deployment of Oracle on vSphere 5.x/6.0 is not significantly different from deploying Oracle on physical servers. DBAs can fully leverage their current skill set while also delivering the benefits associated with virtualization.

In addition to this guide, VMware has created separate best practice documents for storage, networking, and performance.

This document also includes information from two white papers, Performance Best Practices for VMware vSphere 5.5 and Performance Best Practices for VMware vSphere 6.0.

VMware Support for Oracle Databases on vSphere

Oracle has a support statement for VMware products (MyOracleSupport 249212.1). While there has been much public discussion about Oracle’s perceived position on support for VMware virtualization, experience shows that Oracle Support upholds its commitment to customers, including those using VMware virtualization in conjunction with Oracle products.

VMware is also an Oracle customer. VMware IT’s own E-Business Suite and Siebel implementations are virtualized. VMware routinely submits support requests and receives assistance with issues for Oracle software running on VMware virtual infrastructure. The MyOracleSupport (MetaLink) Document ID 249212.1 provides the specifics of Oracle’s support commitment to VMware. Gartner, IDC, and others also have documents available to their subscribers that specifically address this policy.

VMware Oracle Support Process

VMware support will accept tickets for any Oracle-related issue reported by a customer and will help drive the issue to resolution. To augment Oracle’s support document, VMware also has a total ownership policy for customers with Oracle issues as described in the letter at VMware® Oracle Support Affirmation.

By being accountable, VMware Support will drive the issue to resolution regardless of which vendor (VMware, Oracle or other) is responsible for the resolution. In most cases, reported issues can be resolved through configuration changes, bug fixes, or feature enhancements by one of the involved vendors. VMware is committed to its customers’ success and supports their choice to run Oracle software in modern, virtualized environments. For further information, see https://www.vmware.com/support/policies/oracle-support.

VMware vSphere Oracle Support Process

Figure 1 – VMware vSphere Oracle Support Process

Download

Download a full Oracle Databases on VMware Best Practices Guide

Rating: 5/5


Jun 10

VMware vCenter Server™ 6.0 Deployment Guide

Introduction

The VMware vCenter Server™ 6.0 release introduces new, simplified deployment models. The components that make up a vCenter Server installation have been grouped into two types: embedded and external. Embedded refers to a deployment in which all components—which can, but does not have to, include the database—are installed on the same virtual machine. External refers to a deployment in which vCenter Server is installed on one virtual machine and the Platform Services Controller (PSC) is installed on another. The Platform Services Controller is new to vCenter Server 6.0 and comprises VMware vCenter™ Single Sign-On™, licensing, and the VMware Certificate Authority (VMCA).

Embedded installations are recommended for standalone environments in which there is only one vCenter Server system and replication to another Platform Services Controller is not required. If there is a need to replicate with other Platform Services Controllers or there is more than one vCenter Single Sign-On enabled solution, deploying the Platform Services Controller(s) on separate virtual machine(s)—via external deployment—from vCenter Server is required.

This paper defines the services installed as part of each deployment model, recommended deployment models (reference architectures), installation and upgrade instructions for each reference architecture, postdeployment steps, and certificate management in VMware vSphere 6.0.

VMware vCenter Server 6.0 Services

vCenter Server and Platform Services Controller Services

Figure 1 – vCenter Server and Platform Services Controller Services

Requirements

General
A few requirements are common to both installing vCenter Server on Microsoft Windows and deploying VMware vCenter Server Appliance™. Ensure that all of these prerequisites are in place before proceeding with a new installation or an upgrade; a short scripted sanity check is sketched after the list.

  • DNS – Ensure that resolution is working for all system names via fully qualified domain name (FQDN), short name (host name), and IP address (reverse lookup).
  • Time – Ensure that time is synchronized across the environment.
  • Passwords – vCenter Single Sign-On passwords must contain only ASCII characters; non-ASCII and extended (or high) ASCII characters are not supported.
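
The checklist above is easy to verify with a short script before starting an installation. The sketch below is only an illustration with placeholder host names: it checks FQDN, short-name, and reverse (PTR) resolution plus the ASCII-only password rule; time synchronization is left to your NTP tooling.

    # Illustrative pre-install sanity check; host names and the password are placeholders.
    import socket

    def check_dns(fqdn):
        """Verify FQDN, short-name, and reverse (PTR) resolution for a system."""
        short = fqdn.split(".")[0]
        ip_from_fqdn = socket.gethostbyname(fqdn)        # forward lookup by FQDN
        ip_from_short = socket.gethostbyname(short)      # forward lookup by short name
        ptr_name, _, _ = socket.gethostbyaddr(ip_from_fqdn)  # reverse lookup
        consistent = (ip_from_fqdn == ip_from_short and
                      ptr_name.lower().startswith(short.lower()))
        print(f"{fqdn}: forward={ip_from_fqdn}, reverse={ptr_name}, consistent={consistent}")

    def password_is_ascii(pwd):
        """vCenter Single Sign-On passwords must contain only ASCII characters."""
        return all(ord(ch) < 128 for ch in pwd)

    for name in ("vcenter01.example.com", "psc01.example.com"):
        check_dns(name)

    print("Password ASCII-only:", password_is_ascii("VMware1!"))
    # Time synchronization (NTP) across hosts, vCenter, and the appliance should also be
    # verified, for example with your monitoring or NTP tooling; it is not checked here.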

Windows Installation

Installing vCenter Server 6.0 on a Windows Server requires a Windows 2008 SP2 or higher 64-bit operating system (OS). Two options are presented: Use the local system account or use a Windows domain account. With a Windows domain account, ensure that it is a member of the local computer’s administrator group and that it has been delegated the “Log on as a service” right and the “Act as part of the operating system” right. This option is not available when installing an external Platform Services Controller.

Windows installations can use either a supported external database or a local PostgreSQL database that is installed with vCenter Server and is limited to 20 hosts and 200 virtual machines. Supported external databases include Microsoft SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, Oracle Database 11g, and Oracle Database 12c. When upgrading to vCenter Server 6.0, if SQL Server Express was used in the previous installation, it will be replaced with PostgreSQL. External databases require a 64-bit DSN. DSN aliases are not supported.

When upgrading to vCenter Server 6.0, only upgrades from version 5.0 and later are supported. If the vCenter Server system being upgraded is older than version 5.0, it must first be upgraded to at least version 5.0.

Table 2 outlines minimum hardware requirements per deployment environment type and size when using an external database. If VMware vSphere Update Manager™ is installed on the same server, add 125GB of disk space and 4GB of RAM.

Minimum Hardware Requirements – Windows Installation

Table 2. Minimum Hardware Requirements – Windows Installation

Download

Download a full VMware vCenter Server™ 6.0 Deployment Guide

Rating: 5/5


May 22

Common vCenter Server Tasks in the vSphere Web Client Part 1

https://kb.vmware.com/kb/2145397

This video demonstrates how to perform the following common VMware vCenter Server tasks using the vSphere Web Client.

– Creating a Datacenter
– Creating a Cluster
– Adding an ESXi host
– Licensing an ESXi host

This VMware video tutorial is aimed at vSphere users who are new to the vSphere Web Client.
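
For admins who prefer automation to clicking through the Web Client, the same four tasks can also be scripted. The following pyVmomi sketch is an assumption-heavy illustration (placeholder names such as DC01, Cluster01, esxi01.example.com, and a dummy license key), not a transcript of the video.

    # Illustrative pyVmomi sketch of the four tasks above; all names and keys are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only; validate certificates in production
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # 1. Create a datacenter under the root folder.
    dc = content.rootFolder.CreateDatacenter(name="DC01")

    # 2. Create a cluster inside the datacenter.
    cluster = dc.hostFolder.CreateClusterEx(name="Cluster01", spec=vim.cluster.ConfigSpecEx())

    # 3. Add an ESXi host and 4. assign it a license key in the same call. In production,
    #    supply the host's SSL thumbprint in the ConnectSpec or handle the SSLVerifyFault
    #    raised on the first connection attempt.
    host_spec = vim.host.ConnectSpec(hostName="esxi01.example.com",
                                     userName="root",
                                     password="esxi-password",
                                     force=True)
    WaitForTask(cluster.AddHost_Task(spec=host_spec, asConnected=True,
                                     license="00000-00000-00000-00000-00000"))
    Disconnect(si)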

Rating: 5/5


May 18

Goodbye vSphere Client for Windows (C#) – Hello HTML5

Goodbye vSphere Client for Windows

Today we have two important announcements. First, the C# client (AKA Desktop Client/thick client/vSphere Client for Windows) will not be available for the next version of vSphere. Current versions of vSphere (6.0, 5.5) will not be affected, as those will follow the standard support period. You’ve heard this from us in the past, but we’ve been waiting for a sufficient replacement before finally moving forward. Second, we want to talk about the recent vSphere HTML5 Web Client Fling, user adoption, and VMware’s focus on bringing a great user experience. Like the Embedded Host Client Fling (which made it into vSphere in 6.0U2), we plan on bringing this product into a supported release soon.

Looking to the Future

VMware has been working towards the transition to HTML5 with the Platform Services Controller UI, vCenter Server Appliance Management UI, and the Host Client. All three of these were very well received and have become the official interfaces for their respective components. The last (and biggest) one to tackle was the management interface for vCenter Server.

vSphere Web Client has always been intended to be the replacement for the Desktop client, and many of our users have tried to embrace this during the vSphere 5.5 and vSphere 6.0 periods, spending their time working within the Web Client even with the Desktop client available.

While there were certainly issues with the 5.5 and 6.0 Web Client, many users that committed to the experience came to enjoy many of the new features and usability improvements. We also continued to listen to our customers, and further efforts to improve the Web Client experience were made across 5.5U3, 6.0U1 and 6.0U2, including the addition of VUM (vSphere Update Manager) to the Web Client in 6.0U1. We have kept the Desktop client available during this period, which was much longer than originally planned. But now that time is ending.

Additionally, due to the shift in backend services going from vSphere 6.0 to the next version, updating the Desktop client would have required a huge investment. This may have been okay in a vacuum, but the required resources would have severely impacted the progress of the new vSphere Client, only to end up with four clients for users to juggle. We decided to focus on bringing the new vSphere Client (HTML5 based) up to speed as fast as possible, simultaneously offering a great user experience and getting off of Flash.

We’ll be referring to the new client as the vSphere Client, as it better describes the product, and isn’t a ten syllable mouthful (vSphere HTML5 Web Client).

The new vSphere Client (HTML5)

Try it here: The new vSphere Client (HTML5)

This decision is about VMware trying to provide the best user experience: a fast, reliable, scalable, modern interface that allows you to get your work done is our primary goal. The new vSphere Client is the best way to achieve that goal. Many have already tried out the Fling (https://labs.vmware.com/flings/vsphere-html5-web-client), with approximately 40% of survey respondents deploying it into Production and using it daily to manage their critical environments. With this Fling, we’ll keep the user experience mostly the same as the Web Client, which we’ve improved based on your feedback. We also plan on making additional improvements to make it easier for C# users to transition.

One benefit of the Fling delivery model is very fast turnaround. We’ve been able to release a new version of the Fling every week, with new features, bug fixes, and performance improvements. More importantly, we’ve been able to quickly incorporate user feedback into the product. Sometimes this means simple bug fixes, sometimes this means changing our priorities to better address user needs. While this pace and model of delivery may not be used for the fully supported releases, due to testing time required, we likely will continue to use the Fling releases to stay on track with users. A fundamental part of this high touch engagement model is users staying as up-to-date as possible, and most of our Fling users are doing just that, so thank you!

Plugins

We also recognize how important plugins are, and the transition from Web Client to vSphere Client will take second- and third-party plugins into account. We’ve already started engaging with plugin developers of all sorts to get them moving to the HTML bridge, which will allow the creation of a single plugin that is forward and backward compatible with both the vSphere Client and the Web Client, creating a smooth transition path. If you require more information on plugin migration, please contact us. One great source of information is this site, which contains a lot of forward-looking information about vCenter. It will be updated as more information becomes available, so keep an eye on it: http://www.vmware.com/products/vcenter-server/future-overview/overview.html

We do expect the plugin transition to take some time, and this means that we expect to ship the Flex based Web Client and the HTML5 based vSphere Client side by side for some uncertain period. Everyone is very eager to have the new vSphere Client as the only client, but we want to respect the porting development time our partners require.

Seeking your Feedback

Hopefully these announcements come as a shock to no one – they are simply a reiteration of the message VMware has given for years. We are continually working to make vSphere Client a fast, reliable, and scalable product that provides a great overall experience. If you have any comments, please post them below. We’d like to hear feedback from all points of view, as we look to the future instead of the past.

Dennis Lu
Product Manager, vSphere Clients

Rating: 5/5


May 15

VMware vSphere® Distributed Switch Best Practices

Introduction

This paper provides best practice guidelines for deploying the VMware vSphere® distributed switch (VDS) in a vSphere environment. The advanced capabilities of VDS provide network administrators with more control of and visibility into their virtual network infrastructure. This document covers the different considerations that vSphere and network administrators must take into account when designing the network with VDS. It also discusses some standard best practices for configuring VDS features.

The paper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure. It is important to note that customers are not limited to the design options described in this paper. The flexibility of the vSphere platform allows for multiple variations in the design options that can fulfill an individual customer’s unique network infrastructure needs.

This document is intended for vSphere and network administrators interested in understanding and deploying VDS in a virtual datacenter environment. With the release of vSphere 5, there are new features as well as enhancements to the existing features in VDS. To learn more about these new features and enhancements, refer to the What’s New in Networking Technical White Paper.

Readers are also encouraged to refer to basic virtual and physical networking concepts before reading through this document.

For physical networking concepts, readers should refer to any physical network switch vendor’s documentation.

Design Considerations

The following three main aspects influence the design of a virtual network infrastructure:
1) Customer’s infrastructure design goals
2) Customer’s infrastructure component configurations
3) Virtual infrastructure traffic requirements

Let’s take a look at each of these aspects in a little more detail.

Infrastructure Design Goals

Customers want their network infrastructure to be available 24/7, to be secure from any attacks, to perform efficiently throughout day-to-day operations, and to be easy to maintain. In the case of a virtualized environment, these requirements become increasingly demanding as growing numbers of business-critical applications run in a consolidated setting. These requirements on the infrastructure translate into design decisions that should incorporate the following best practices for a virtual network infrastructure:

  • Avoid any single point of failure in the network
  • Isolate each traffic type for increased resiliency and security
  • Make use of traffic management and optimization capabilities

Infrastructure Component Configurations

In every customer environment, the utilized compute and network infrastructures differ in terms of configuration, capacity and feature capabilities. These different infrastructure component configurations influence the virtual network infrastructure design decisions. The following are some of the configurations and features that administrators must look out for:

  • Server configuration: rack or blade servers
  • Network adapter configuration: 1GbE or 10GbE network adapters; number of available adapters; offload function on these adapters, if any
  • Physical network switch infrastructure capabilities: switch clustering

It is impossible to cover all the different virtual network infrastructure design deployments based on the various combinations of type of servers, network adapters and network switch capability parameters. In this paper, the following four commonly used deployments that are based on standard rack server and blade server configurations are described:

  • Rack server with eight 1GbE network adapters
  • Rack server with two 10GbE network adapters
  • Blade server with two 10GbE network adapters
  • Blade server with hardware-assisted multiple logical Ethernet network adapters

It is assumed that the network switch infrastructure has standard layer 2 switch features (high availability, redundant paths, fast convergence, port security) available to provide reliable, secure and scalable connectivity to the server infrastructure.

Virtual Infrastructure Traffic

vSphere virtual network infrastructure carries different traffic types. To manage the virtual infrastructure traffic effectively, vSphere and network administrators must understand the different traffic types and their characteristics. The following are the key traffic types that flow in the vSphere infrastructure, along with their traffic characteristics:

Management traffic: This traffic flows through a vmknic and carries VMware ESXi host-to-VMware vCenter configuration and management communication as well as ESXi host-to-ESXi host high availability (HA)-related communication. This traffic has low network utilization but has very high availability and security requirements.

VMware vSphere vMotion traffic: With advancement in vMotion technology, a single vMotion instance can consume almost a full 10Gb of bandwidth. A maximum of eight simultaneous vMotion instances can be performed on a 10Gb uplink; four simultaneous vMotion instances are allowed on a 1Gb uplink. vMotion traffic has very high network utilization and can be bursty at times. Customers must make sure that vMotion traffic doesn’t impact other traffic types, because it might consume all available I/O resources. Another property of vMotion traffic is that it is not sensitive to throttling, which makes it a very good candidate for traffic management.

Fault-tolerant traffic: When VMware Fault Tolerance (FT) logging is enabled for a virtual machine, all the logging traffic is sent to the secondary fault-tolerant virtual machine over a designated vmknic port. This process can require a considerable amount of bandwidth at low latency because it replicates the I/O traffic and memory-state information to the secondary virtual machine.

iSCSI/NFS traffic: IP storage traffic is carried over vmknic ports. This traffic varies according to disk I/O requests. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, decreasing the number of frames on the network. These larger frames reduce the overhead on servers/targets and improve IP storage performance. On the other hand, congested and lower-speed networks can cause latency issues that disrupt access to IP storage. It is recommended that users provide a high-speed path for IP storage and avoid any congestion in the network infrastructure.

Virtual machine traffic: Depending on the workloads that are running on the guest virtual machine, the traffic patterns will vary from low to high network utilization. Some of the applications running in virtual machines might be latency sensitive, as is the case with VoIP workloads.

Table 1 summarizes the characteristics of each traffic type.


Table 1. Traffic Types and Characteristics.

To understand the different traffic flows in the physical network infrastructure, network administrators use network traffic management tools. These tools help monitor the physical infrastructure traffic but do not provide visibility into virtual infrastructure traffic. With the release of vSphere 5, VDS now supports the NetFlow feature, which enables exporting the internal (virtual machine-to-virtual machine) virtual infrastructure flow information to standard network management tools. Administrators now have the required visibility into virtual infrastructure traffic. This helps administrators monitor the virtual network infrastructure traffic through a familiar set of network management tools. Customers should make use of the network data collected from these tools during capacity planning or network design exercises.

Example Deployment Components

After looking at the different design considerations, this section provides a list of components that are used in an example deployment. This example deployment helps illustrate some standard VDS design approaches. The following are some common components in the virtual infrastructure. The list doesn’t include storage components that are required to build the virtual infrastructure. It is assumed that customers will deploy IP storage in this example deployment.

Hosts

Four ESXi hosts provide compute, memory and network resources according to the configuration of the hardware. Customers can have different numbers of hosts in their environment, based on their needs. One VDS can span up to 350 hosts. This capability to support large numbers of hosts provides the required scalability to build a private or public cloud environment using VDS.

Clusters

A cluster is a collection of ESXi hosts and associated virtual machines with shared resources. Customers can have as many clusters in their deployment as are required. With one VDS able to span up to 350 hosts, customers have the flexibility of deploying multiple clusters with a different number of hosts in each cluster. For simple illustration purposes, two clusters with two hosts each are considered in this example deployment. One cluster can have a maximum of 32 hosts.

VMware vCenter Server

VMware vCenter Server centrally manages a vSphere environment. Customers can manage VDS through this centralized management tool, which can be deployed on a virtual machine or a physical host. The vCenter Server system is not shown in the diagrams, but customers should assume that it is present in this example deployment. It is used only to provision and manage VDS configuration. When provisioned, hosts and virtual machine networks operate independently of vCenter Server. All components required for network switching reside on ESXi hosts. Even if the vCenter Server system fails, the hosts and virtual machines will still be able to communicate.

Network Infrastructure

Physical network switches in the access and aggregation layer provide connectivity between ESXi hosts and to the external world. These network infrastructure components support standard layer 2 protocols providing secure and reliable connectivity.

Along with the preceding four components of the physical infrastructure in this example deployment, some of the virtual infrastructure traffic types are also considered during the design. The following section describes the different traffic types in the example deployment.

Virtual Infrastructure Traffic Types

In this example deployment, there are standard infrastructure traffic types, including iSCSI, vMotion, FT, management and virtual machine. Customers might have other traffic types in their environment, based on their choice of storage infrastructure (FC, NFS, FCoE). Figure 1 shows the different traffic types along with associated port groups on an ESXi host. It also shows the mapping of the network adapters to the different port groups.


Figure 1. Different Traffic Types Running on a Host.

Download

Download a full VMware vSphere® Distributed Switch Best Practices Technical White Paper

Rating: 5/5


May 15

What’s New in VMware vSphere™ 5.0 Networking

Introduction

With the release of VMware vSphere™ 5.0 (“vSphere”), VMware brings a number of powerful new features and enhancements to the networking capabilities of the vSphere platform. These new network capabilities enable customers to run business-critical applications with confidence and provide the flexibility to enable customers to respond to business needs more rapidly. All the networking capabilities discussed in this document are available only with the VMware vSphere Distributed Switch (Distributed Switch).

There are two broad types of networking capabilities that are new or enhanced in the VMware vSphere 5.0 release. The first type improves the network administrator’s ability to monitor and troubleshoot virtual infrastructure traffic by introducing features such as:

  • NetFlow
  • Port mirror

The second type focuses on enhancements to the network I/O control (NIOC) capability first released in vSphere 4.1. These NIOC enhancements target the management of I/O resources in consolidated I/O environments with 10GbE network interface cards. The enhancements to NIOC enable customers to provide end-to-end quality of service (QoS) through allocating I/O shares for user-defined traffic types as well as tagging packets for prioritization by external network infrastructure. The following are the key NIOC enhancements:

  • User-defined resource pool
  • vSphere replication traffic type
  • IEEE 802.1p tagging

The following sections will provide higher-level details on new and enhanced networking capabilities in vSphere 5.0.

Network Monitoring and Troubleshooting

In a vSphere 5.0 environment, virtual network switches provide connectivity for virtual machines running on VMware® ESXi™ hosts to communicate with each other as well as connectivity to the external physical infrastructure. Network administrators want more visibility into this traffic that is flowing in the virtual infrastructure. This visibility will help them monitor and troubleshoot network issues. VMware vSphere 5.0 introduces two new features in the Distributed Switch that provide the required monitoring and troubleshooting capability to the virtual infrastructure.

NetFlow

NetFlow is a networking protocol that collects IP traffic information as records and sends them to a collector such as CA NetQoS for traffic flow analysis. VMware vSphere 5.0 supports NetFlow v5, which is the most common version supported by network devices. NetFlow capability in the vSphere 5.0 platform provides visibility into virtual infrastructure traffic that includes:

  • Intrahost virtual machine traffic (virtual machine–to–virtual machine traffic on the same host)
  • Interhost virtual machine traffic (virtual machine–to–virtual machine traffic on different hosts)
  • Virtual machine–physical infrastructure traffic

Figure 1 shows a Distributed Switch configured to send NetFlow records to a collector that is connected to an external network switch. The blue dotted line with arrow indicates the NetFlow session that is established to send flow records for the collector to analyze.

NetFlow Traffic

Figure 1. NetFlow Traffic

Usage

NetFlow capability on a Distributed Switch along with a NetFlow collector tool helps monitor application flows and measures flow performance over time. It also helps in capacity planning and ensuring that I/O resources are utilized properly by different applications, based on their needs.

IT administrators who want to monitor the performance of application flows running in the virtualized environment can enable flow monitoring on a Distributed Switch.

Configuration

NetFlow on Distributed Switches can be enabled at the port group level, at an individual port level or at the uplink level. When configuring NetFlow at the port level, administrators should select the NetFlow override tab, which will make sure that flows are monitored even if the port group–level NetFlow is disabled.
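
To make the configuration concrete, here is a hedged pyVmomi sketch of the same two steps: pointing the Distributed Switch at a collector and then enabling NetFlow on one distributed port group. The switch and port group objects, the collector address, and the timer values are all assumptions; verify property names against the API version you run.

    # Illustrative sketch; assumes 'dvs' is an existing vim.dvs.VmwareDistributedVirtualSwitch
    # and 'pg' one of its vim.dvs.DistributedVirtualPortgroup objects (looked up from the
    # inventory as in the earlier sketches). All values are placeholders.
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Switch-level NetFlow (IPFIX) settings: collector address and flow timers.
    ipfix = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
        collectorIpAddress="192.0.2.10",   # NetFlow collector, e.g. CA NetQoS
        activeFlowTimeout=60,
        idleFlowTimeout=15,
        samplingRate=0,                    # 0 = sample every packet
        internalFlowsOnly=False)           # also export VM-to-physical flows
    dvs_spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,
        ipfixConfig=ipfix)
    WaitForTask(dvs.ReconfigureDvs_Task(dvs_spec))

    # Port group level: turn NetFlow monitoring on for every port in one port group.
    pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        configVersion=pg.config.configVersion,
        defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            ipfixEnabled=vim.BoolPolicy(value=True)))
    WaitForTask(pg.ReconfigureDVPortgroup_Task(spec=pg_spec))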

Port Mirror

Port mirroring is the capability on a network switch to send a copy of network packets seen on a switch port to a network monitoring device connected to another switch port. Port mirroring is also referred to as Switch Port Analyzer (SPAN) on Cisco switches. In VMware vSphere 5.0, a Distributed Switch provides a similar port mirroring capability to that available on a physical network switch. After a port mirror session is configured with a destination—a virtual machine, a vmknic or an uplink port—the Distributed Switch copies packets to the destination. Port mirroring provides visibility into:

  • Intrahost virtual machine traffic (virtual machine–to–virtual machine traffic on the same host)
  • Interhost virtual machine traffic (virtual machine–to–virtual machine traffic on different hosts)

Figure 2 shows different types of traffic flows that can be monitored when a virtual machine on a host acts as a destination or monitoring device. All traffic shown by the orange dotted line with arrow is mirrored traffic that is sent to the destination virtual machine.

Port Mirror Traffic

Figure 2. Port Mirror Traffic Flows When Destination Where Packets Are Mirrored Is a Virtual Machine

Usage

The port mirroring capability on a Distributed Switch is a valuable tool that helps network administrators in debugging network issues in a virtual infrastructure. The granular control over monitoring ingress, egress or all traffic of a port helps administrators fine-tune what traffic is sent for analysis.

Configuration

Port mirror configuration can be done at the Distributed Switch level, where a network administrator can create a port mirror session by identifying the traffic source that needs monitoring and the traffic destination where the traffic will be mirrored. The traffic source can be any port with ingress, egress or all traffic selected. The traffic destination can be any virtual machine, vmknic or uplink port.
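
As a companion to that description, the following pyVmomi sketch adds a simple port mirror session on a Distributed Switch. It is illustrative only: 'dvs' is assumed to be an existing switch object, and the port keys "100" (source virtual machine) and "200" (monitoring destination) are placeholders you would look up from the inventory.

    # Illustrative sketch; 'dvs' and the port keys are placeholders.
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    session = vim.dvs.VmwareDistributedVirtualSwitch.VspanSession(
        name="debug-mirror-01",
        enabled=True,
        # Mirror both directions of the source port's traffic.
        sourcePortReceived=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(portKey=["100"]),
        sourcePortTransmitted=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(portKey=["100"]),
        # Deliver the mirrored packets to the port of the monitoring virtual machine.
        destinationPort=vim.dvs.VmwareDistributedVirtualSwitch.VspanPorts(portKey=["200"]))

    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
        configVersion=dvs.config.configVersion,
        vspanConfigSpec=[vim.dvs.VmwareDistributedVirtualSwitch.VspanConfigSpec(
            operation="add", vspanSession=session)])
    WaitForTask(dvs.ReconfigureDvs_Task(spec))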

Download

Download a full What’s New in VMware vSphere™ 5.0 Networking Technical White Paper.

Rating: 5/5