Jun 11

Oracle Databases on VMware Best Practices Guide

Introduction

This Oracle Databases on VMware Best Practices Guide provides best practice guidelines for deploying Oracle databases on VMware vSphere®. The recommendations in this guide are not specific to any particular hardware set, or to the size and scope of any particular Oracle database implementation. The examples and considerations provide guidance but do not represent strict design requirements.

The successful deployment of Oracle on vSphere 5.x/6.0 is not significantly different from deploying Oracle on physical servers. DBAs can fully leverage their current skill set while also delivering the benefits associated with virtualization.

In addition to this guide, VMware has created separate best practice documents for storage, networking, and performance.

This document also includes information from two white papers, Performance Best Practices for VMware vSphere 5.5 and Performance Best Practices for VMware vSphere 6.0.

VMware Support for Oracle Databases on vSphere

Oracle has a support statement for VMware products (MyOracleSupport 249212.1). While there has been much public discussion about Oracle’s perceived position on support for VMware virtualization, experience shows that Oracle Support upholds its commitment to customers, including those using VMware virtualization in conjunction with Oracle products.

VMware is also an Oracle customer. VMware IT’s own E-Business Suite and Siebel implementations are virtualized. VMware routinely submits issues for Oracle running on VMware virtual infrastructure and receives assistance with them. The MyOracleSupport (MetaLink) Document ID 249212.1 provides the specifics of Oracle’s support commitment to VMware. Gartner, IDC, and others also have documents available to their subscribers that specifically address this policy.

VMware Oracle Support Process

VMware support will accept tickets for any Oracle-related issue reported by a customer and will help drive the issue to resolution. To augment Oracle’s support document, VMware also has a total ownership policy for customers with Oracle issues as described in the letter at VMware® Oracle Support Affirmation.

By being accountable, VMware Support will drive the issue to resolution regardless of which vendor (VMware, Oracle, or other) is responsible for the resolution. In most cases, reported issues can be resolved through configuration changes, bug fixes, or feature enhancements by one of the involved vendors. VMware is committed to its customers’ success and supports their choice to run Oracle software in modern, virtualized environments. For further information, see https://www.vmware.com/support/policies/oracle-support

VMware vSphere Oracle Support Process

Figure 1 – VMware vSphere Oracle Support Process

Download

Download the full Oracle Databases on VMware Best Practices Guide

Rating: 5/5


Jun 10

VMware vCenter Server™ 6.0 Deployment Guide

Introduction

The VMware vCenter Server™ 6.0 release introduces new, simplified deployment models. The components that make up a vCenter Server installation have been grouped into two types: embedded and external. Embedded refers to a deployment in which all components—this can but does not necessarily include the database—are installed on the same virtual machine. External refers to a deployment in which vCenter Server is installed on one virtual machine and the Platform Services Controller (PSC) is installed on another. The Platform Services Controller is new to vCenter Server 6.0 and comprises VMware vCenter™ Single Sign-On™, licensing, and the VMware Certificate Authority (VMCA).

Embedded installations are recommended for standalone environments in which there is only one vCenter Server system and replication to another Platform Services Controller is not required. If there is a need to replicate with other Platform Services Controllers or there is more than one vCenter Single Sign-On enabled solution, deploying the Platform Services Controller(s) on separate virtual machine(s)—via external deployment—from vCenter Server is required.

This paper defines the services installed as part of each deployment model, recommended deployment models (reference architectures), installation and upgrade instructions for each reference architecture, post-deployment steps, and certificate management in VMware vSphere 6.0.

VMware vCenter Server 6.0 Services


Figure 1 – vCenter Server and Platform Services Controller Services

Requirements

General
A few requirements are common to both installing vCenter Server on Microsoft Windows and deploying VMware vCenter Server Appliance™. Ensure that all of these prerequisites are in place before proceeding with a new installation or an upgrade.

  • DNS – Ensure that resolution is working for all system names via fully qualified domain name (FQDN), short name (host name), and IP address (reverse lookup); a quick scripted check is sketched after this list.
  • Time – Ensure that time is synchronized across the environment.
  • Passwords – vCenter Single Sign-On passwords must contain only ASCII characters; non-ASCII and extended (or high) ASCII characters are not supported.
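
As a quick pre-flight check, all three lookups can be scripted. The following is a minimal Python sketch; the host names are hypothetical, and the short-name lookup assumes the client resolver’s DNS search domain is configured:

```python
# Verify forward (FQDN and short name) and reverse DNS resolution agree
# before running the vCenter Server installer. Host names are placeholders.
import socket

HOSTS = ["vcenter01.corp.example.com", "psc01.corp.example.com"]

for fqdn in HOSTS:
    short = fqdn.split(".")[0]
    ip_fqdn = socket.gethostbyname(fqdn)             # forward lookup via FQDN
    ip_short = socket.gethostbyname(short)           # forward lookup via short name
    rev_name, _, _ = socket.gethostbyaddr(ip_fqdn)   # reverse (PTR) lookup
    ok = ip_fqdn == ip_short and rev_name.lower() == fqdn.lower()
    print(f"{fqdn}: {ip_fqdn}, reverse={rev_name} -> {'OK' if ok else 'MISMATCH'}")
```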

Windows Installation

Installing vCenter Server 6.0 on a Windows Server requires a Windows 2008 SP2 or higher 64-bit operating system (OS). Two options are presented: Use the local system account or use a Windows domain account. With a Windows domain account, ensure that it is a member of the local computer’s administrator group and that it has been delegated the “Log on as a service” right and the “Act as part of the operating system” right. This option is not available when installing an external Platform Services Controller.

Windows installations can use either a supported external database or a local PostgreSQL database that is installed with vCenter Server and is limited to 20 hosts and 200 virtual machines. Supported external databases include Microsoft SQL Server 2008 R2, SQL Server 2012, SQL Server 2014, Oracle Database 11g, and Oracle Database 12c. When upgrading to vCenter Server 6.0, if SQL Server Express was used in the previous installation, it will be replaced with PostgreSQL. External databases require a 64-bit DSN. DSN aliases are not supported.
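
When using an external database, the 64-bit DSN can be exercised ahead of the installation. Below is a minimal sketch using the pyodbc module against a SQL Server DSN; the DSN name and credentials are placeholders, and it must be run under a 64-bit Python interpreter so that the same 64-bit ODBC stack the installer uses is the one being tested:

```python
# Quick test that the vCenter database DSN resolves and accepts logins.
# Run under 64-bit Python so the 64-bit ODBC driver stack is exercised,
# mirroring what the vCenter Server installer requires.
import pyodbc  # pip install pyodbc

DSN = "VCDB"        # hypothetical 64-bit system DSN name
USER = "vpxuser"    # hypothetical database login
PASSWORD = "secret"

conn = pyodbc.connect(f"DSN={DSN};UID={USER};PWD={PASSWORD}", timeout=10)
row = conn.cursor().execute("SELECT @@VERSION").fetchone()  # SQL Server example query
print(row[0])
conn.close()
```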

When upgrading vCenter Server to vCenter Server 6.0, only versions 5.0 and later are supported. If the vCenter Server system being upgraded is older than version 5.0, it must first be upgraded to version 5.0 or later.

Table 2 outlines minimum hardware requirements per deployment environment type and size when using an external database. If VMware vSphere Update Manager™ is installed on the same server, add 125GB of disk space and 4GB of RAM.


Table 2. Minimum Hardware Requirements – Windows Installation

Download

Download the full VMware vCenter Server™ 6.0 Deployment Guide

Rating: 5/5


Jun 07

VMware® NSX for vSphere Network Virtualization Design Guide ver 3.0


Intended Audience

This document is targeted toward virtualization and network architects interested in deploying VMware® NSX network virtualization solution in a vSphere environment.

This is an updated edition of the VMware® NSX for vSphere Network Virtualization Design Guide.
Authors: VMware NSX Technical Product Management Team

Overview

IT organizations have gained significant benefits as a direct result of server virtualization. Tangible advantages of server consolidation include reduced physical complexity, increased operational efficiency, and simplified dynamic repurposing of underlying resources. These technology solutions have delivered on their promise of helping IT to quickly and optimally meet the needs of increasingly dynamic business applications.

VMware’s Software Defined Data Center (SDDC) architecture moves beyond the server, extending virtualization technologies across the entire physical data center infrastructure. VMware NSX, the network virtualization platform, is a key product in the SDDC architecture. With VMware NSX, virtualization now delivers for networking what it has already delivered for compute. Traditional server virtualization programmatically creates, snapshots, deletes, and restores virtual machines (VMs); similarly, network virtualization with VMware NSX programmatically creates, snapshots, deletes, and restores software-based virtual networks. The result is a completely transformative approach to networking, enabling orders of magnitude better agility and economics while also vastly simplifying the operational model for the underlying physical network.

NSX is a completely non-disruptive solution which can be deployed on any IP network from any vendor – both existing traditional networking models and next generation fabric architectures. The physical network infrastructure already in place is all that is required to deploy a software-defined data center with NSX.



Figure 1 – Server and Network Virtualization Analogy

Figure 1 draws an analogy between compute and network virtualization. With server virtualization, a software abstraction layer (i.e., server hypervisor) reproduces the familiar attributes of an x86 physical server (e.g., CPU, RAM, disk, NIC) in software. This allows components to be programmatically assembled in any arbitrary combination to produce a unique VM in a matter of seconds.

With network virtualization, the functional equivalent of a “network hypervisor” reproduces layer 2 to layer 7 networking services (e.g., switching, routing, firewalling, and load balancing) in software. These services can then be programmatically assembled in any arbitrary combination, producing unique, isolated virtual networks in a matter of seconds.


Figure 2 – Network Virtualization Abstraction Layer and Underlying Infrastructure

Where VMs are independent of the underlying x86 platform and allow IT to treat physical hosts as a pool of compute capacity, virtual networks are independent of the underlying IP network hardware. IT can thus treat the physical network as a pool of transport capacity that can be consumed and repurposed on demand.
This abstraction is illustrated in Figure 2. Unlike legacy architectures, virtual networks can be provisioned, changed, stored, deleted, and restored programmatically without reconfiguring the underlying physical hardware or topology. By matching the capabilities and benefits derived from familiar server and storage virtualization solutions, this transformative approach to networking unleashes the full potential of the software-defined data center.

With VMware NSX, existing networks are immediately ready to deploy a next-generation software-defined data center. This paper highlights the range of functionality provided by the VMware NSX for vSphere architecture, exploring design factors to consider in order to fully leverage and optimize existing network investments.

NSX Primary Use Cases

Customers are using NSX to drive business benefits as shown in the figure below.
The main themes for NSX deployments are Security, IT Automation, and Application Continuity.


Figure 3 – NSX Use Cases

Security:

  • NSX can be used to create a secure infrastructure implementing a zero-trust security model. Every virtualized workload can be protected with a full stateful firewall engine at a very granular level. Security can be based on constructs such as MAC addresses, IP addresses, ports, vCenter objects and tags, Active Directory groups, and so on. Intelligent dynamic security grouping can drive the security posture within the infrastructure.

    NSX can be used in conjunction with third-party security vendors such as Palo Alto Networks, Check Point, Fortinet, or McAfee to provide a complete DMZ-like security solution within a cloud infrastructure.

    NSX has been widely deployed to secure virtual desktops, some of the most vulnerable workloads residing in the data center, and to prevent desktop-to-desktop hacking.

Automation:

  • VMware NSX provides a full RESTful API to consume networking, security, and services, which can be used to drive automation within the infrastructure (a minimal API call is sketched after this list). IT admins can reduce the tasks and cycles required to provision workloads within the data center using NSX.

    NSX is integrated out of the box with automation tools such as vRealize Automation, providing customers with a one-click deployment option for an entire application, including compute, storage, network, security, and L4–L7 services.

    Developers can use NSX with the OpenStack platform. NSX provides a Neutron plug-in that can be used to deploy applications and topologies via OpenStack.
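
To illustrate the API-driven model, the sketch below retrieves the list of transport zones from NSX Manager over the NSX for vSphere REST API. The manager address and credentials are placeholders, and the /api/2.0/vdn/scopes endpoint is shown as it appears in the NSX-v API documentation; verify it against the API guide for your release:

```python
# List NSX for vSphere transport zones via the NSX Manager REST API.
# Manager address and credentials are placeholders for your environment.
import requests

NSX_MANAGER = "https://nsxmgr.example.com"  # hypothetical NSX Manager
AUTH = ("admin", "password")

resp = requests.get(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes",    # NSX-v transport zone collection
    auth=AUTH,
    headers={"Accept": "application/xml"},  # NSX-v APIs return XML
    verify=False,                           # lab only: self-signed certificates
)
resp.raise_for_status()
print(resp.text)  # XML listing of transport zones (vdnScope entries)
```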

Application Continuity:

  • NSX provides a way to easily extend networking and security across up to eight vCenter Server instances, either within or across data centers. In conjunction with vSphere 6.0, customers can easily vMotion a virtual machine across long distances, and NSX ensures that the network and the firewall rules remain consistent across the sites. This essentially maintains the same view across sites.

    NSX Cross-vCenter Networking can help build active-active data centers. Customers are using NSX today with VMware Site Recovery Manager to provide disaster recovery solutions. NSX can extend the network across data centers, and even to the cloud, to enable seamless networking and security.

The use cases outlined above are a key reason why customers are investing in NSX. NSX is uniquely positioned to solve these challenges because it brings networking and security closest to the workload itself and carries the policies along with the workload.

Overview of NSX Network Virtualization Solution

An NSX deployment consists of a data plane, control plane, and management plane, as shown in Figure 4.


Figure 4 – NSX Components


The NSX architecture has built-in separation of the data, control, and management layers. The NSX components that map to each layer, and each layer’s architectural properties, are shown in Figure 4 above. This separation allows the architecture to grow and scale without impacting workloads.

In this version 3.0 edition, the guide has been updated to provide additional context around:

1. Sizing for small and medium data centers with NSX
2. Routing best practices
3. Micro-segmentation and service composer design guidance

Thanks to all the contributors to and reviewers of the various sections of the document.
A final version of this Reference Guide will be posted soon on our NSX Technical Resources website: http://www.vmware.com/products/nsx/resources.html

Download

Download the full NSX Reference Design Version 3.0 Guide

Rating: 5/5


May 15

VMware vSphere® Distributed Switch Best Practices

Introduction

This paper provides best practice guidelines for deploying the VMware vSphere® distributed switch (VDS) in a vSphere environment. The advanced capabilities of VDS provide network administrators with more control of and visibility into their virtual network infrastructure. This document covers the different considerations that vSphere and network administrators must take into account when designing the network with VDS. It also discusses some standard best practices for configuring VDS features.

The paper describes two example deployments, one using rack servers and the other using blade servers. For each of these deployments, different VDS design approaches are explained. The deployments and design approaches described in this document are meant to provide guidance as to what physical and virtual switch parameters, options and features should be considered during the design of a virtual network infrastructure. It is important to note that customers are not limited to the design options described in this paper. The flexibility of the vSphere platform allows for multiple variations in the design options that can fulfill an individual customer’s unique network infrastructure needs.

This document is intended for vSphere and network administrators interested in understanding and deploying VDS in a virtual datacenter environment. With the release of vSphere 5, there are new features as well as enhancements to the existing features in VDS. To learn more about these new features and enhancements, refer to the What’s New in Networking Technical White Paper.

Readers are also encouraged to review basic virtual and physical networking concepts before reading through this document.

For physical networking concepts, readers should refer to any physical network switch vendor’s documentation.

Design Considerations

The following three main aspects influence the design of a virtual network infrastructure:
1) Customer’s infrastructure design goals
2) Customer’s infrastructure component configurations
3) Virtual infrastructure traffic requirements

Let’s take a look at each of these aspects in a little more detail.

Infrastructure Design Goals

Customers want their network infrastructure to be available 24/7, to be secure from any attacks, to perform efficiently throughout day-to-day operations, and to be easy to maintain. In the case of a virtualized environment, these requirements become increasingly demanding as growing numbers of business-critical applications run in a consolidated setting. These requirements on the infrastructure translate into design decisions that should incorporate the following best practices for a virtual network infrastructure:

  • Avoid any single point of failure in the network
  • Isolate each traffic type for increased resiliency and security
  • Make use of traffic management and optimization capabilities

Infrastructure Component Configurations

In every customer environment, the utilized compute and network infrastructures differ in terms of configuration, capacity and feature capabilities. These different infrastructure component configurations influence the virtual network infrastructure design decisions. The following are some of the configurations and features that administrators must look out for:

  • Server configuration: rack or blade servers
  • Network adapter configuration: 1GbE or 10GbE network adapters; number of available adapters; offload function on these adapters, if any
  • Physical network switch infrastructure capabilities: switch clustering

It is impossible to cover all the different virtual network infrastructure design deployments based on the various combinations of server types, network adapters and network switch capability parameters. In this paper, the following four commonly used deployments, based on standard rack server and blade server configurations, are described:

  • Rack server with eight 1GbE network adapters
  • Rack server with two 10GbE network adapters
  • Blade server with two 10GbE network adapters
  • Blade server with hardware-assisted multiple logical Ethernet network adapters

It is assumed that the network switch infrastructure has standard layer 2 switch features (high availability, redundant paths, fast convergence, port security) available to provide reliable, secure and scalable connectivity to the server infrastructure.

Virtual Infrastructure Traffic

vSphere virtual network infrastructure carries different traffic types. To manage the virtual infrastructure traffic effectively, vSphere and network administrators must understand the different traffic types and their characteristics. The following are the key traffic types that flow in the vSphere infrastructure, along with their traffic characteristics:

Management traffic: This traffic flows through a vmknic and carries VMware ESXi host-to-VMware vCenter configuration and management communication, as well as ESXi host-to-ESXi host high availability (HA)-related communication. This traffic has low network utilization but has very high availability and security requirements.

VMware vSphere vMotion traffic: With advancements in vMotion technology, a single vMotion instance can consume almost the full bandwidth of a 10GbE link. A maximum of eight simultaneous vMotion instances can be performed on a 10GbE uplink; four simultaneous vMotion instances are allowed on a 1GbE uplink. vMotion traffic has very high network utilization and can be bursty at times. Customers must make sure that vMotion traffic doesn’t impact other traffic types, because it might consume all available I/O resources. Another property of vMotion traffic is that it is not sensitive to throttling, which makes it a very good candidate for traffic management.

Fault-tolerant traffic: When VMware Fault Tolerance (FT) logging is enabled for a virtual machine, all the logging traffic is sent to the secondary fault-tolerant virtual machine over a designated vmknic port. This process can require a considerable amount of bandwidth at low latency because it replicates the I/O traffic and memory-state information to the secondary virtual machine.

iSCSI/NFS traffic: IP storage traffic is carried over vmknic ports. This traffic varies according to disk I/O requests. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, decreasing the number of frames on the network. These larger frames reduce the overhead on servers/targets and improve IP storage performance. On the other hand, congested and lower-speed networks can cause latency issues that disrupt access to IP storage. It is recommended that users provide a high-speed path for IP storage and avoid any congestion in the network infrastructure.

Virtual machine traffic: Depending on the workloads that are running on the guest virtual machine, the traffic patterns will vary from low to high network utilization. Some of the applications running in virtual machines might be latency sensitive, as is the case with VoIP workloads.

Table 1 summarizes the characteristics of each traffic type.


Table 1. Traffic Types and Characteristics.

To understand the different traffic flows in the physical network infrastructure, network administrators use network traffic management tools. These tools help monitor the physical infrastructure traffic but do not provide visibility into virtual infrastructure traffic. With the release of vSphere 5, VDS now supports the NetFlow feature, which enables exporting the internal (virtual machine-to-virtual machine) virtual infrastructure flow information to standard network management tools. Administrators now have the required visibility into virtual infrastructure traffic. This helps administrators monitor the virtual network infrastructure traffic through a familiar set of network management tools. Customers should make use of the network data collected from these tools during capacity planning or network design exercises.
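
The NetFlow feature itself can be enabled programmatically. The following is a minimal pyVmomi sketch, assuming a VDS named dvSwitch0 and a collector at 10.0.0.50 (both hypothetical), using the vSphere API’s IPFIX configuration objects for the VMware distributed switch:

```python
# Enable NetFlow (IPFIX) export on an existing VDS with pyVmomi.
# vCenter address, switch name, and collector details are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret")  # add an sslContext argument for self-signed certificates
content = si.RetrieveContent()

# Locate the distributed switch by name with a container view.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch0")

# Point the switch at a NetFlow collector via the VDS IPFIX configuration.
spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
spec.configVersion = dvs.config.configVersion
spec.ipfixConfig = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
    collectorIpAddress="10.0.0.50",  # NetFlow collector address
    collectorPort=2055,
    activeFlowTimeout=60,
    idleFlowTimeout=15,
    samplingRate=0,                  # 0 samples every packet
    internalFlowsOnly=False)
task = dvs.ReconfigureDvs_Task(spec=spec)
print("Reconfigure task submitted:", task.info.key)
Disconnect(si)
```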

Example Deployment Components

After looking at the different design considerations, this section provides a list of components that are used in an example deployment. This example deployment helps illustrate some standard VDS design approaches. The following are some common components in the virtual infrastructure. The list doesn’t include the storage components that are required to build the virtual infrastructure. It is assumed that customers will deploy IP storage in this example deployment.

Hosts

Four ESXi hosts provide compute, memory and network resources according to the configuration of the hardware. Customers can have different numbers of hosts in their environment, based on their needs. One VDS can span up to 350 hosts. This capability to support large numbers of hosts provides the required scalability to build a private or public cloud environment using VDS.

Clusters

A cluster is a collection of ESXi hosts and associated virtual machines with shared resources. Customers can have as many clusters in their deployment as are required. With one VDS spanning up to 350 hosts, customers have the flexibility of deploying multiple clusters with a different number of hosts in each cluster. For simple illustration purposes, two clusters with two hosts each are considered in this example deployment. One cluster can have a maximum of 32 hosts.

VMware vCenter Server

VMware vCenter Server centrally manages a vSphere environment. Customers can manage VDS through this centralized management tool, which can be deployed on a virtual machine or a physical host. The vCenter Server system is not shown in the diagrams, but customers should assume that it is present in this example deployment. It is used only to provision and manage VDS configuration. When provisioned, hosts and virtual machine networks operate independently of vCenter Server. All components required for network switching reside on ESXi hosts. Even if the vCenter Server system fails, the hosts and virtual machines will still be able to communicate.

Network Infrastructure

Physical network switches in the access and aggregation layer provide connectivity between ESXi hosts and to the external world. These network infrastructure components support standard layer 2 protocols providing secure and reliable connectivity.
Along with the preceding four components of the physical infrastructure in this example deployment, some of the virtual infrastructure traffic types are also considered during the design. The following section describes the different traffic types in the example deployment.

Virtual Infrastructure Traffic Types

In this example deployment, there are standard infrastructure traffic types, including iSCSI, vMotion, FT, management and virtual machine. Customers might have other traffic types in their environment, based on their choice of storage infrastructure (FC, NFS, FCoE). Figure 1 shows the different traffic types along with associated port groups on an ESXi host. It also shows the mapping of the network adapters to the different port groups.


Figure 1. Different Traffic Types Running on a Host.

Download

Download the full VMware vSphere® Distributed Switch Best Practices Technical White Paper

Rating: 5/5


Apr 04

USB Device Redirection, Configuration, and Usage in View Virtual Desktops

Introduction

In the 5.1 release of View, VMware introduced some complex configuration options for the usage and management of USB devices in a View virtual desktop session. This white paper gives a high-level overview of USB remoting, discusses the configuration options, and provides some practical worked examples to illustrate how these options can be used.

USB Redirection Overview

We are all familiar with using USB devices on laptop or desktop machines. If you are working in a virtual desktop infrastructure (VDI) environment such as View, you may want to use your USB devices in the virtual desktop, too. USB device redirection is a function in View that allows USB devices to be connected to the virtual desktop as if they had been physically plugged into it. Typically, the user selects a device from the VMware Horizon Client menu to be forwarded to the virtual desktop. After a few moments, the device appears in the guest virtual machine, ready for use.


Figure 1. USB Redirection

Definitions of Terms

In this paper, various terms are used to describe the components involved in USB redirection. The following are some brief definitions of terms:

  • USB redirection – Forwarding of the functions of a USB device from the physical endpoint to the View virtual machine.
  • Client computer, or client, or client machine – Physical endpoint displaying the virtual desktop with which the user interfaces, and where the USB device is physically plugged in.
  • Virtual desktop or guest virtual machine – The Windows desktop stored in the data center that is displayed remotely on the endpoint. This virtual desktop runs a Windows guest operating system, and has the View Agent installed on it.
  • Soft client – Horizon Client in software format, such as a Horizon Client for Windows or Linux. The soft client is installed on a hardware endpoint, such as a laptop, and displays the virtual desktop on the endpoint.
  • Zero client – A hardware-based client used to connect to a View desktop. Stateless device containing no operating system. Delivers the client login interface for View.
  • Thin client – A hardware device similar to a zero client, but with an OS installed. The Horizon Client is installed onto the OS of the thin client. Both devices generally lack local user-accessible storage and simply connect to the virtual desktop in the data center.
  • USB interface – A function within a USB device, such as a mouse, keyboard, or audio function. Some USB devices have multiple functions and are called composite (USB) devices.
  • Composite (USB) device – A USB device with multiple functions, or interfaces.
  • HID – Human interface device. A device with which the user physically interacts, such as mice, keyboards, and joysticks.
  • VID – The vendor identification, or code, for a USB device, which identifies the vendor that produced the device.
  • PID – The product identification, or code, which, combined with the VID, uniquely identifies a USB device within a vendor’s family of USB products. The VID and PID are used within View USB configuration settings to identify the specific driver needed for the device.
  • USB device filtering – Restricting some USB devices from being forwarded from the endpoint to the virtual desktop. You specify which devices will be prevented from being forwarded: individual VID-PID device models, device families (such as storage devices), or devices from specific vendors. A purely illustrative sketch of this concept appears after this list.
  • USB device splitting – The ability to configure a USB device so that, when it is connected to a View desktop, some of its USB interfaces remain local to the client endpoint while other interfaces are forwarded to the guest. This can result in an improved user experience of the device in a virtual environment.
  • USB Boolean settings – Simple “on” or “off” settings. For example, whether a specific feature is enabled (true) or disabled (false).
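
To make the filtering concept concrete, here is a purely illustrative Python sketch; it is not View’s actual configuration syntax or implementation, just the decision logic a VID/PID filter embodies:

```python
# Illustrative only: evaluate a VID/PID block list the way a USB filter
# conceptually works. This is not View's configuration syntax.
BLOCKED = {
    ("0x0781", "0x5567"),       # hypothetical: a specific flash drive model
}
BLOCKED_FAMILIES = {"storage"}  # hypothetical: block all mass-storage devices

def may_forward(vid: str, pid: str, family: str) -> bool:
    """Return True if the device may be forwarded to the virtual desktop."""
    if (vid, pid) in BLOCKED:
        return False
    if family in BLOCKED_FAMILIES:
        return False
    return True

print(may_forward("0x046d", "0xc31c", "hid"))      # keyboard: True
print(may_forward("0x0781", "0x5567", "storage"))  # blocked model: False
```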

Download

Download the full USB Device Redirection, Configuration, and Usage in View Virtual Desktops white paper.

Rating: 5/5


Feb 26

VMware Horizon View 6.0.2 and VMware Virtual SAN 6.0 Hybrid Reference Architecture

This is a reference architecture using VMware Horizon® View™ 6.0.2 running on VMware Virtual SAN™ 6.0 in a hybrid configuration, based on realistic test scenarios, user workloads, and infrastructure system configurations. The architecture comprises SuperMicro rack mount servers with local storage to support a scalable and cost-effective VMware Horizon View linked-clone desktop deployment on VMware vSphere® 6.0.
Extensive user experience and operations testing, including Login VSI desktop performance testing of up to 1,600 desktops and desktop provisioning operations of up to 2,400 desktops, revealed world-class performance at an extremely low cost. VMware Virtual SAN technology allows easy scalability while maintaining superior performance at a competitive price point.

VMware reference architectures are built and validated by VMware and supporting partners. They are designed to address common use cases; examples include enterprise desktop replacement, remote access, business process outsourcing, and disaster recovery. A reference architecture describes the environment and workload used to simulate realistic usage, and draws conclusions based on that particular deployment.
This guide is intended to help customers—IT architects, consultants, and administrators—involved in the early phases of planning, design and deployment of Horizon View–based solutions. The purpose is to
provide a standard, repeatable, and highly scalable design that can be easily adapted to specific
environments and customer requirements.

The reference architecture “building block” approach uses common components to minimize support costs
and deployment risks during the planning of large-scale, Horizon View–based deployments. The building block approach is based on information and experiences from some of the largest VMware deployments in production today. While drawing on existing best practices and deployment guides pertinent to many of the individual specific components, the reference architectures are tested and validated in the field and described in detail.

Some key features that can help an organization get started quickly with a solution that integrates easily into existing IT processes and procedures include:

  • Standardized, validated, readily available components
  • Scalable designs that allow room for future growth
  • Validated and tested designs that reduce implementation and operational risks
  • Quick implementation, reduced costs, and minimized risk

Download

Download the full VMware Horizon View 6.0.2 and VMware Virtual SAN 6.0 Hybrid Reference Architecture.

Rating: 5/5


Feb 26

Virtual SAN 6.0 Design and Sizing Guide

Written by Cormac Hogan
Storage and Availability Business Unit
VMware, v 1.0.5/April 2015

Introduction

VMware® Virtual SAN™ is a hypervisor-converged, software-defined storage platform that is fully integrated with VMware vSphere®. Virtual SAN aggregates locally attached disks of hosts that are members of a vSphere cluster, to create a distributed shared storage solution. Virtual SAN enables the rapid provisioning of storage within VMware vCenter™ as part of virtual machine creation and deployment operations.

Virtual SAN is the first policy-driven storage product designed for vSphere environments that simplifies and streamlines storage provisioning and management. Using VM-level storage policies, Virtual SAN automatically and dynamically matches requirements with underlying storage resources. With Virtual SAN, many manual storage tasks are automated – delivering a more efficient and cost-effective operational model.

Virtual SAN 6.0 provides two different configuration options, a hybrid configuration that leverages both flash-based devices and magnetic disks, and an all-flash configuration. The hybrid configuration uses server-based flash devices to provide a cache layer for optimal performance while using magnetic disks to provide capacity and persistent data storage. This delivers enterprise performance and a resilient storage platform.

The all-flash configuration uses flash for both the caching layer and the capacity layer. There is a wide range of options for selecting a host model, storage controller, flash devices and magnetic disks. It is therefore extremely important that the VMware Compatibility Guide (VCG) be followed rigorously when selecting hardware components for a Virtual SAN design.

This document focuses on helping administrators correctly design and size a Virtual SAN cluster, and answers some of the common questions around the number of hosts, number of flash devices, number of magnetic disks, and the detailed configuration choices needed to deploy a Virtual SAN correctly and successfully.
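
As a flavor of the sizing arithmetic the guide walks through, here is a rough Python sketch using two commonly cited rules of thumb: protected capacity is multiplied by NumberOfFailuresToTolerate + 1 replicas, and the hybrid flash cache tier is sized at roughly 10% of anticipated consumed capacity. All inputs are hypothetical; a real design should follow the guide and the VCG:

```python
# Back-of-the-envelope Virtual SAN hybrid sizing. Inputs are hypothetical;
# follow the Design and Sizing Guide and the VCG for a real design.
vm_count = 200
vmdk_gb_per_vm = 50
ftt = 1                     # NumberOfFailuresToTolerate in the storage policy

consumed_gb = vm_count * vmdk_gb_per_vm    # anticipated consumed capacity
raw_capacity_gb = consumed_gb * (ftt + 1)  # each failure tolerated adds a full replica
cache_gb = 0.10 * consumed_gb              # ~10% cache rule of thumb (before FTT)

hosts = 4
print(f"Raw magnetic capacity needed: {raw_capacity_gb} GB "
      f"({raw_capacity_gb / hosts:.0f} GB/host across {hosts} hosts)")
print(f"Flash cache tier needed:      {cache_gb:.0f} GB "
      f"({cache_gb / hosts:.0f} GB/host)")
```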

Download

Download the full VMware® Virtual SAN™ 6.0 Design and Sizing Guide.

Also download the VMware® Virtual SAN™ 6.0 Administrators Guide.

Rating: 5/5


Feb 26

VMware Virtual SAN Design and Sizing Guide for Horizon View

VMware Virtual SAN™ is a hypervisor-converged, software-defined storage platform that is fully integrated with VMware vSphere®. Virtual SAN aggregates locally attached disks of hosts that are members of a vSphere cluster, to create a distributed shared storage solution. Virtual SAN enables the rapid provisioning of storage within VMware vCenter™ as part of virtual machine creation and deployment operations.

Virtual SAN uses a hybrid disk architecture that leverages both flash-based devices for performance and magnetic disks for capacity and persistent data storage. This hybrid architecture delivers a scale-out storage platform with enterprise performance and resiliency at a compelling price point.

The distributed datastore of Virtual SAN is an object-store file system that leverages the vSphere Storage Policy–Based Management (SPBM) framework to deliver application-centric storage services and capabilities that are centrally managed through vSphere virtual machine storage policies.
This document focuses on the definitions, sizing guidelines, and characteristics of the Virtual SAN distributed datastore for Horizon™ View™ virtual desktop infrastructures.

Download

Download the full VMware® Virtual SAN™ Design and Sizing Guide for Horizon View Virtual Desktop Infrastructure.

Rating: 5/5


Feb 26

PCoIP Optimization Best Practices (VMware View)

Understand the PCoIP protocol that powers VMware View desktops. Learn how to tune the PCoIP protocol for different workloads to optimize performance.

Rating: 5/5


Nov 03

Installing vCenter Server 6.0 best practices (KB2107948)

Purpose

This article provides best practices for installing VMware vCenter Server 6.0.

Note: This article assumes that you have read the vSphere 6.0 Installation and Setup documentation.
The documentation contains definitive information. If there is a discrepancy between the documentation and this article, assume that the documentation is correct. Ensure that you have read the known issues in the VMware vSphere 6.0 Release Notes.

Resolution

Deployment models and Platform Services Controllers

With vSphere 6.0, we have introduced deployment models and Platform Services Controllers. There are two deployment models:

    ■ Embedded Deployment Model:
    – vCenter Server with an embedded Platform Services Controller
    ■ External Deployment Model:
    – External Platform Services Controller
    – vCenter Server with an external Platform Services Controller

For more information about when to use the external or embedded deployment models, see vCenter Server Deployment Models.

Warning: Although the installation can complete successfully, some resulting topologies may not be recommended by VMware. For a list of recommended topologies and mitigation steps, see List of recommended topologies for vSphere 6.0.x (2108548).

Warning: If you decide on an external deployment model, the installation must be done sequentially, starting with the Platform Services Controllers, not concurrently.

If you have used vSphere 5.x, you may be familiar with some services that were previously deployed independently of vCenter Server. For more information about how the vSphere 5.x services map to the new vSphere 6.0 services, see Migration of Distributed vCenter Server for Windows Services During Upgrade to vCenter Server 6.0 in our documentation.

For further related installation information, see the vCenter Server 6.0 documentation subtopics:

vCenter Server Components and Services
vCenter Server Deployment Models
Overview of the vSphere Installation and Setup Process
vSphere Security Certificates Overview
Enhanced Linked Mode Overview

vCenter Server for Windows Requirements

To install vCenter Server on a Windows virtual machine or physical server, your system must meet specific hardware and software requirements. For more information, see the following related prerequisites; a quick clock-offset check is sketched after the list.

vCenter Server for Windows Hardware Requirements
vCenter Server for Windows Storage Requirements
Required free space for system logging
vCenter Server for Windows Software Requirements
Supported host operating systems for VMware vCenter Server installation (including vCenter Update Manager and vRealize Orchestrator) (2091273)
Verify that the FQDN is resolvable
Synchronizing Clocks on the vSphere network
vCenter Server for Windows Database Requirements
VMware Product Interoperability Matrix
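
Clock skew between vCenter components is a common cause of installation and Single Sign-On failures, so it is worth verifying before you begin. Here is a minimal sketch using the third-party ntplib module; the NTP server is a placeholder for whichever time source your environment uses:

```python
# Report local clock offset against an NTP server before installation.
# The NTP server below is a placeholder; use your environment's time source.
import ntplib  # pip install ntplib

NTP_SERVER = "pool.ntp.org"

response = ntplib.NTPClient().request(NTP_SERVER, version=3)
print(f"Local clock offset vs {NTP_SERVER}: {response.offset:+.3f} seconds")
if abs(response.offset) > 1.0:
    print("Warning: synchronize clocks before installing vCenter Server components.")
```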

vCenter Server Appliance Requirements

The appliance upgrade process has changed significantly from the previous release. There is now a downloadable ISO file containing the appliance. The ISO contains an installer that will deploy the appliance to a host running ESXi 5.1.x or later. The tool runs in a browser on a Microsoft Windows platform. The tool requires the Client Integration Plug-in to function and therefore must meet its requirements. For more information about installing the Client Integration Plug-in software on your Windows platform, see Install Client Integration Plug-in. For information about the appliance upgrade tool, see Upgrade the vCenter Server Appliance.

Prior to installing vSphere 6.0, ensure that you have met these requirements:

vCenter Server Appliance Hardware Requirements
vCenter Server Appliance Storage Requirements
Verify that the FQDN is resolvable
Synchronizing Clocks on the vSphere network
vCenter Server Appliance Software Requirements
vCenter Server Appliance Database Requirements
Software Included in the vCenter Server Appliance

vSphere Web Client Software Requirements

Make sure that your browser supports the vSphere Web Client.
The vSphere Web Client 6.0 requires Adobe Flash Player 16 or later. The latest Adobe Flash Player version for Linux systems is 11.2. Therefore, the vSphere Web Client cannot run on Linux platforms.
VMware has tested and supports these guest operating systems and browser versions for the vSphere Web Client. For best performance, use Google Chrome.

Client Integration Plug-in Software Requirements

If you plan to install the Client Integration Plug-in separately from the vSphere Web Client so that you can connect to an ESXi host and deploy or upgrade the vCenter Server Appliance, make sure that your browser supports the Client Integration Plug-in.
To use the Client Integration Plug-in, verify that you have one of the supported Web browsers.

vSphere Client Requirements

You can install the vSphere Client to manage a single ESXi host. The Windows system on which you install the vSphere Client must meet specific hardware and software requirements.

vSphere Client Hardware Requirements
vSphere Client Software Requirements
Supported host operating systems for vSphere Client (Windows) installation (2100436)
TCP and UDP Ports for the vSphere Client

Additional Information

Walkthroughs
If you would like to see step-by-step upgrade instructions with annotations, go to our Product Walkthroughs.

Frequently Asked Questions

Q: Are Oracle databases supported as an external database for vCenter Server Appliance?
A: Oracle 11g and 12c are supported as external databases for vCenter Server Appliance 6.0.

Q: Will Oracle databases continue to be supported beyond vSphere 6.0 for vCenter Server Appliance?
A: Oracle 11g and 12c as external databases for vCenter Server Appliance are deprecated in the 6.0 release. VMware will provide general support per the lifecycle support policy.

Q: What is the plan to support Microsoft SQL databases with vCenter Server Appliance 6.0?
A: There is no plan to support Microsoft SQL Server as an external database. The embedded vPostgres database supports scale limits of 1,000 hosts and 10,000 virtual machines.

Q: Can I deploy vCenter Server Appliance using OVFTOOL?
A: Starting with vSphere 6.0, the user-interface-based installer is the only supported method to install and upgrade the vCenter Server Appliance.

Q: Why can’t I deploy the vCenter Server Appliance on a vCenter Server cluster?
A: This feature is planned for a future release.

Q: What is considered too large for an embedded deployment model?
A: An embedded deployment model can only include a single instance of vCenter Server. If your requirements exceed the limits of a single instance of vCenter Server, you should consider using an external model.

Q: What is considered too small for an external deployment model?
A: If you only require a single instance of vCenter Server and you do not expect the environment to grow, you should consider an embedded model.

Rating: 5/5