The purpose of this document is to explain how to size bandwidth requirements for Virtual SAN in Stretched Cluster configurations. This document only covers the Virtual SAN network bandwidth requirements.
In Stretched Cluster configurations, two data fault domains have one or more hosts, and the third fault domain contains a witness host or witness appliance. In this document each data fault domain will be referred to as a site.
Virtual SAN Stretched Cluster configurations can be spread across distances, provided bandwidth and latency requirements are met.
The bandwidth requirement between the main sites depends heavily on the workload to be run on Virtual SAN, the amount of data, and the handling of failure scenarios. Under normal operating conditions, the basic bandwidth requirements are described below.
Bandwidth Requirements Between Sites
Workloads are seldom all reads or writes, and normally include a general read to write ratio for each use case.
A good example of this would be a VDI workload. During peak utilization, VDI often behaves with a 70/30 write to read ratio. That is to say that 70% of the IO is due to write operations and 30% is due to read IO. As each solution has many factors, true ratios should be measured for each workload.
Using the general situation where a total IO profile requires 100,000 IOPS, of which 70% are write, and 30% are read, in a Stretched configuration, the write IO is what is sized against for inter-site bandwidth requirements.
With Stretched Clusters, read traffic is, by default, serviced by the site that the VM resides on. This concept is called Read Locality.
The required bandwidth between two data sites (B) is equal to Write bandwidth (Wb) * data multiplier (md) * resynchronization multiplier (mr):
B = Wb * md * mr
The data multiplier consists of overhead for Virtual SAN metadata traffic and miscellaneous related operations. VMware recommends a data multiplier of 1.4.
The resynchronization multiplier is included to account for resynchronization events. VMware recommends allocating an additional 25% of bandwidth capacity on top of the required bandwidth to make room for resynchronization traffic.
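As a worked example of the formula above (the 10,000 write IOPS and 4KB IO size are assumed illustration values, not figures from this document):

```python
# Sketch of the inter-site bandwidth formula B = Wb * md * mr,
# using assumed example numbers (10,000 write IOPS at a 4KB IO size).
write_iops = 10_000          # assumed write IOPS at the busiest site
io_size_bytes = 4 * 1024     # assumed average IO size (4KB)

wb_mbps = write_iops * io_size_bytes * 8 / 1_000_000  # write bandwidth (Wb) in Mbps
md = 1.4                     # data multiplier (VSAN metadata overhead)
mr = 1.25                    # resynchronization multiplier (+25% headroom)

b_mbps = wb_mbps * md * mr
print(f"Write bandwidth: {wb_mbps:.2f} Mbps")               # 327.68 Mbps
print(f"Required inter-site bandwidth: {b_mbps:.2f} Mbps")  # 573.44 Mbps
```

With these assumed numbers, roughly 575 Mbps of inter-site bandwidth would be required for the write traffic alone.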
Bandwidth Requirements Between Witness & Data Sites
Witness bandwidth isn’t calculated in the same way as inter-site bandwidth requirements. Witnesses do not maintain VM data, but rather only component metadata.
It is important to remember that data is stored on Virtual SAN in the form of objects. An object is composed of one or more components, and objects include items such as:
- VM Home or namespace
- VM Swap object
- Virtual Disks
Objects can be split into more than one component when the size exceeds 255GB and/or a Number of Disk Stripes Per Object (stripe width) policy is applied. Additionally, the number of objects/components for a given virtual machine is multiplied when a Number of Failures to Tolerate (FTT) policy is applied for data protection and availability.
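These rules multiply together. The following is a deliberately simplified estimator for a single virtual disk object, assuming RAID-1 mirroring; it ignores witness components and other internals, so real component counts will differ:

```python
import math

def estimate_components(vmdk_size_gb, stripe_width=1, ftt=1):
    """Rough component estimate for one virtual disk object, assuming
    RAID-1 mirroring: each of the (FTT + 1) replicas is split per
    255GB chunk and per stripe. Witness components and other internal
    details are ignored in this simplified sketch."""
    chunks = max(1, math.ceil(vmdk_size_gb / 255))  # split when > 255GB
    replicas = ftt + 1                              # FTT multiplies replicas
    return chunks * stripe_width * replicas

# Example: a 500GB VMDK with stripe width 2 and FTT=1:
print(estimate_components(500, stripe_width=2, ftt=1))  # 8
```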
The required bandwidth between the Witness and each site is equal to ~1138 B x Number of Components / 5s.
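Translating that rule of thumb into numbers (the 1,000-component cluster is an assumed example):

```python
# Witness bandwidth rule of thumb from above:
# ~1138 bytes per component every 5 seconds.
def witness_bandwidth_mbps(num_components):
    bytes_per_component = 1138
    interval_s = 5
    return num_components * bytes_per_component * 8 / interval_s / 1_000_000

# Example: a cluster with 1,000 components (assumed number):
print(f"{witness_bandwidth_mbps(1000):.2f} Mbps")  # 1.82 Mbps
```

In other words, witness traffic scales with component count, not with VM data size, so it is usually far smaller than the inter-site requirement.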
Download the full VMware® Virtual SAN™ Stretched Cluster – Bandwidth Sizing Guidance technical white paper.
Using a supported storage controller and firmware is important in a VSAN deployment to ensure normal operations, optimal performance, and to reduce the chances of hardware/firmware issues. This tool, the VSAN Hardware Compatibility List Checker, can be used to verify that a storage device and its firmware went through certification testing by VMware and its partners.
Some scenarios where the tool can be useful:
- Verify whether a new server and storage adapter are supported for a VSAN deployment
- Verify whether a re-purposed server and storage adapter are supported for a VSAN deployment
For a full VSAN system check, please install the VSAN Health Check Plugin after a VSAN deployment.
Client Requirements
- Windows: XP, 7, 8, 10, Server 2008, 2012
- Browser: IE 8 and above
- HTTPS/443 access to ESXi hosts (interacting with hostd)
- TCP/5989 access to ESXi hosts (CIM service secure)
- Internet HTTP/80 access to http://partnerweb.vmware.com/service/vsan/all.json (optional)
ESXi Host Requirements
- ESXi 5.1.x, 5.5.x, 6.x are supported
- Direct access to the hostd service with username and password
- CIM service (secure) running on port 5989
- CIM provider for HP or LSI controller
- Extract the contents and double-click on hclCheck.exe
hclCheck [-h] [--hostname hostname [hostname ...]] [--username USERNAME]
         [--password PASSWORD] [--hcl-url HCL_URL]

Check ESXi host against the VSAN HCL

  -h, --help            show this help message and exit
  --hostname hostname [hostname ...]
                        Hostname/IP of ESXi host
  --username USERNAME   Username (default: root)
  --password PASSWORD   Password
  --hcl-url HCL_URL     URL to VSAN HCL DB (http://path/to/hcl.db or file:///C:/path/to/hcl.db)
Download the VSAN Hardware Compatibility List Checker v7 from VMware Flings.
Download the full VSAN 6.2 Design Thoughts Mindmap.
Today we are officially announcing the launch of VMware Virtual SAN 6.2. Virtual SAN is the foundation of VMware’s Hyper-Converged Software (HCS) that enables customers to experience industry leading hyper-converged infrastructure. Customers are finding VMware HCS key in adopting a Software Defined Data Center (SDDC) approach of streamlining and automating storage, networking and compute. Learn more about VMware HCS here.
Along with the Virtual SAN 6.2 launch, we are unveiling new extensions to the Virtual SAN Ready Node program enabling OEMs to offer customers the ability to build VMware HCS–based solutions with maximum flexibility of hardware, software, licensing, and support.
Now in its 4th edition, Virtual SAN is the market leader in hyper-converged infrastructure. More than 3,000 customers across industries and sizes trust Virtual SAN to run their most mission-critical applications. Virtual SAN 6.2 adds capabilities that lower cost and improve management and monitoring for the most demanding customer storage environments. In this new release, VMware continues to focus on providing a reliable and consistent storage environment for the biggest business-critical applications.
VMware is expanding the capabilities of Virtual SAN as a platform and is introducing better data efficiency features by delivering deduplication and compression of data as well as providing RAID-5/RAID-6 support for all flash Virtual SAN environments.
A critical aspect of Virtual SAN is ensuring availability and visibility. Virtual SAN 6.2 Quality of Service (QoS) further enables the ability to prevent noisy neighbors and tenants from impacting competing workloads. Enhancements to usability and management provide better visibility and more proactive operations. The most significant new features and capabilities of Virtual SAN 6.2 are described below in detail:
Data Efficiency (Deduplication and Compression)
Deduplication and compression happen during de-staging from the caching tier to the capacity tier. You enable "space efficiency" at the cluster level, and deduplication happens on a per-disk-group basis; bigger disk groups will result in a higher deduplication ratio. After the blocks are deduplicated, they are compressed. Compression alone is a significant saving, but combined with deduplication the results can be up to a 7x space reduction, fully dependent on the workload and type of VMs.
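As a toy sketch of that de-stage pipeline (fixed-size blocks, SHA-256 fingerprints, and zlib compression are illustrative choices here, not Virtual SAN's actual implementation):

```python
import hashlib
import zlib

def dedupe_and_compress(blocks):
    """Toy illustration of the de-stage pipeline described above:
    deduplicate fixed-size blocks first, then compress only the
    unique blocks. Real Virtual SAN internals are far more involved."""
    seen = set()
    stored_bytes = 0
    for block in blocks:
        digest = hashlib.sha256(block).digest()
        if digest not in seen:                         # dedupe: keep unique blocks only
            seen.add(digest)
            stored_bytes += len(zlib.compress(block))  # then compress what remains
    return stored_bytes

# 100 blocks of 4KB, but only 4 distinct patterns (assumed, highly dedupable data):
blocks = [bytes([i % 4]) * 4096 for i in range(100)]
raw = sum(len(b) for b in blocks)
stored = dedupe_and_compress(blocks)
print(f"raw {raw} B -> stored {stored} B")
```

The achieved ratio depends entirely on how repetitive and compressible the data is, which is why the "up to 7x" figure above is workload-dependent.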
RAID-5/RAID-6 – Erasure Coding
Sometimes RAID-5 and RAID-6 over the network are also referred to as erasure coding. In this case, RAID-5 requires a minimum of 4 hosts as it uses 3+1 logic; with 4 hosts, 1 can fail without data loss. This results in a significant reduction of required disk capacity: with RAID-1 mirroring, a 20GB disk would normally require 40GB of capacity, but with RAID-5 over the network the requirement is only ~27GB. RAID-6 is another option if higher availability is desired. Learn more about the use of Erasure Coding in Virtual SAN 6.2.
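The capacity arithmetic above can be sketched as follows (the RAID-6 4+2 overhead is included as an assumption about the higher-availability option, based on standard erasure-coding layouts):

```python
# Capacity required to protect a 20GB disk under the schemes discussed above.
def required_capacity_gb(size_gb, scheme):
    overhead = {
        "RAID-1 (FTT=1)": 2.0,      # full mirror: 2x
        "RAID-5 (3+1)":   4 / 3,    # 3 data + 1 parity: ~1.33x
        "RAID-6 (4+2)":   6 / 4,    # 4 data + 2 parity: 1.5x (assumed layout)
    }
    return size_gb * overhead[scheme]

for scheme in ("RAID-1 (FTT=1)", "RAID-5 (3+1)", "RAID-6 (4+2)"):
    print(f"{scheme}: {required_capacity_gb(20, scheme):.1f} GB")
# RAID-1 (FTT=1): 40.0 GB
# RAID-5 (3+1): 26.7 GB
# RAID-6 (4+2): 30.0 GB
```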
Quality of Service (QoS)
This enables per-VMDK IOPS limits. They can be deployed through Storage Policy-Based Management (SPBM), tying them to existing policy frameworks. Service providers can use this to create differentiated service offerings using the same cluster/pool of storage. Customers wanting to mix diverse workloads will be interested in being able to keep workloads from impacting each other.
Software Checksum enables customers to detect corruption caused by hardware or software components, including memory and drives, during read or write operations. In the case of drives, there are two basic kinds of corruption. The first is latent sector errors, which are typically the result of a physical disk drive malfunction. The other is silent data corruption, which can happen without warning. Undetected errors can lead to lost or inaccurate data and significant downtime, and there is no effective means of detecting them without end-to-end integrity checking.
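A generic sketch of end-to-end integrity checking of the kind described, using CRC-32 from Python's zlib purely for illustration (Virtual SAN's actual checksum implementation differs):

```python
import zlib

def write_block(data):
    """Store data alongside a checksum computed at write time."""
    return {"data": bytearray(data), "checksum": zlib.crc32(data)}

def read_block(block):
    """Verify the checksum on read to catch silent corruption."""
    if zlib.crc32(bytes(block["data"])) != block["checksum"]:
        raise IOError("checksum mismatch: silent corruption detected")
    return bytes(block["data"])

block = write_block(b"payload that must survive intact")
assert read_block(block) == b"payload that must survive intact"

block["data"][0] ^= 0xFF   # simulate a silent bit flip on the medium
try:
    read_block(block)
except IOError as e:
    print(e)               # checksum mismatch: silent corruption detected
```

Because the check happens at read time against a value recorded at write time, corruption introduced anywhere in between is caught rather than silently returned.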
Virtual SAN now supports IPv4-only, IPv6-only, and dual-stack IPv4/IPv6 configurations. This addresses requirements for customers moving to IPv6 and additionally supports mixed mode for migrations.
Performance Monitoring Service
The Performance Monitoring Service allows customers to monitor existing workloads from vCenter. Customers needing access to tactical performance information will not need to go to vRO. The performance monitor includes macro-level views (cluster latency, throughput, IOPS) as well as granular views (per disk, cache hit ratios, per-disk-group stats) without needing to leave vCenter. It allows aggregation of stats across the cluster into a "quick view" of load and latency, and that information can be shared directly with third-party monitoring solutions via API. The Performance Monitoring Service runs on a distributed database that is stored directly on Virtual SAN.
Virtual SAN has even more storage greatness on the way! For more information visit the VMware Virtual SAN Product page and feel free to check out the Technical White paper.
The next step is to try it yourself! The Virtual SAN Hands-on-Labs give you an opportunity to experiment with many of the key features of Virtual SAN. To find out where Virtual SAN would fit in your environment, sign up for a VSAN Assessment. This free, one-week analysis can reveal where Virtual SAN can reduce cost, eliminate complexity, and increase your agility.
For future updates on Virtual SAN (VSAN) be sure to check Virtual Blocks, as well as follow me on twitter: @Lost_Signal
- Embedded in vSphere®
- Storage Policy-Based Management Framework
- High Performance with Flash Acceleration
The storage market is inundated by new technologies and architectures. In March of 2014, it went in an entirely new direction. That is when VMware introduced Virtual SAN: the first VMware entry into the “software-defined storage” product category. Virtual SAN is optimized for VMware vSphere® environments and is doing for storage what VMware vSphere did for compute.
Since its introduction, Virtual SAN has captured a lot of industry attention, winning top awards at the InterOp and TechEd conferences. VMware Virtual SAN™ is a new approach to storage. Virtual SAN applies the principles of the VMware software-defined data center to storage to create a high-performance, cost-effective alternative for virtualized environments.
What is so special about it? Here are seven reasons why Virtual SAN is a truly unique storage solution that enterprises should incorporate into their storage strategy.
Download the full Seven Reasons to Consider Virtual SAN white paper.
VMware Virtual SAN Ready Nodes
The purpose of this document is to provide VMware Virtual SAN™ Ready Node configurations from OEM vendors. A Virtual SAN Ready Node is a validated server configuration in a tested, certified hardware form factor for Virtual SAN deployment, jointly recommended by the server OEM and VMware. Virtual SAN Ready Nodes are ideal as hyper-converged building blocks for larger data center environments that want automation and the ability to customize hardware and software configurations.
Virtual SAN Ready Node is a turnkey solution for accelerating Virtual SAN deployment, with the following benefits:
1. Complete server configurations jointly recommended by the server OEM and VMware
- Complete with the size, type and quantity of CPU, Memory, Network, I/O Controller, HDD and SSD combined with a certified server that is best suited to run a specific Virtual SAN workload
- Most Ready Nodes come pre-loaded with vSphere and Virtual SAN if the user decides to quote/order as-is
2. Easy to order and faster time to market
- Single orderable “SKU” per Ready Node
- Can be quoted/ordered as-is or customized
3. Benefit of choices
- Work with your server OEM of choice
- Choose the Ready Node profiles based on your workload
- New license sales or for customers with ELA
The Virtual SAN Ready Nodes listed in this document are classified into HY-2 Series, HY-4 Series, HY-6 Series, HY-8 Series, AF-6 Series and AF-8 Series. The solution profiles are defined based on different levels of workload requirements for performance and capacity and each solution profile provides a different price/performance focus.
For guidelines on the hardware choices of a Virtual SAN solution, along with the infrastructure sizing assumptions and design considerations made to create the sample configurations, please refer to the Virtual SAN Hardware Quick Reference Guide.
In order to choose the right Virtual SAN Ready Node for your environment, follow this two-step process:
1. Refer to the Virtual SAN Hardware Quick Reference Guide for guidance on how to identify the right solution profile category for your workload profile and the category of Ready Node that meets your needs
2. Choose Ready Nodes from your vendor of choice listed in this document that correspond to the solution profile category that you identified for your workload
Note: The Virtual Machine profiles including number of Virtual Machines per desktop are based on Virtual SAN 5.5. The Virtual SAN 6.0 numbers will be available after GA.
Download the full Virtual SAN Compatibility Guide Ready Nodes technical white paper.
Confirm your choice against the VMware Virtual SAN Hardware Compatibility Guide.
Virtual SAN 6.0 introduces a new on-disk format that includes VirstoFS technology. This always-sparse filesystem provides the basis for a new snapshot format, also introduced with Virtual SAN 6.0, called vsanSparse. Through the use of the underlying sparseness of the filesystem and a new, in-memory metadata cache for lookups, vsanSparse offers greatly improved performance when compared to previous virtual machine snapshot implementations.
Introducing vsanSparse snapshots
As mentioned in the introduction, Virtual SAN 6.0 has a new on-disk (v2) format that facilitates the introduction of a new type of performance-based snapshot. The new vsanSparse format leverages the underlying sparseness of the new VirstoFS filesystem (v2) on-disk format and a new in-memory caching mechanism for tracking updates. This v2 format is an always-sparse file system (512-byte block size instead of 1MB block size on VMFS-L) and is only available with Virtual SAN 6.0.
When a virtual machine snapshot is created on Virtual SAN 5.5, a vmfsSparse/redo log object is created (you can find out more about this format in appendix A of this paper). In Virtual SAN 6.0, when a virtual machine snapshot is created, vsanSparse “delta” objects get created.
Why is vsanSparse needed?
The new vsanSparse snapshot format provides Virtual SAN administrators with enterprise-class snapshots and clones. The goal is to improve snapshot performance by continuing to use the existing redo log mechanism while utilizing an in-memory metadata cache and a more efficient sparse filesystem layout.
How does vsanSparse work?
When a vsanSparse snapshot is taken of a base disk, a child delta disk is created. The parent is now considered a point-in-time (PIT) copy, and the running point of the virtual machine is now the delta. New writes by the virtual machine go to the delta, while the base disk and other snapshots in the chain satisfy reads. To get the current state of the disk, one can take the "parent" disk and redo all writes from the "children" chain. Thus the children are referred to as "redo logs". In this way, the vsanSparse format is very similar to the earlier vmfsSparse format.
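The read/write behavior of that redo-log chain can be sketched with a toy model (an illustration only; the real vsanSparse on-disk format and in-memory caching are far more involved):

```python
class SparseDelta:
    """Toy model of the redo-log chain described above: new writes land
    in the running (child) delta, while reads fall back through parents
    until some layer holds the block."""
    def __init__(self, parent=None):
        self.blocks = {}          # sparse: only written blocks are stored
        self.parent = parent

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        if lba in self.blocks:    # written in this delta?
            return self.blocks[lba]
        return self.parent.read(lba) if self.parent else None

base = SparseDelta()
base.write(0, "v1")
base.write(1, "unchanged")

snapshot = SparseDelta(parent=base)   # snapshot: base becomes the PIT copy
snapshot.write(0, "v2")               # new writes go to the child delta

print(snapshot.read(0))  # v2         (satisfied by the running delta)
print(snapshot.read(1))  # unchanged  (falls back to the parent/base)
```

Note that the base disk is never modified after the snapshot is taken, which is what makes it a usable point-in-time copy.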
Download the full vsanSparse – TechNote for Virtual SAN 6.0 Snapshots.
VMware customers love the simplicity, performance, and integration of VMware Virtual SAN since its launch. Most customers choose to evaluate Virtual SAN before using it for production – always a good idea. We’ve made a list of issues occasionally encountered as people go through this process.
Follow this guide, and you’ll have a great evaluation.
Before You Start
Plan on testing a reasonable hardware configuration that resembles what you plan to use in production. Refer to the VMware® Virtual SAN™ 6.0 Design and Sizing Guide for information on supported hardware configurations and considerations when deploying Virtual SAN.
All Flash or Hybrid
There are a number of additional considerations if you plan to deploy an all-flash Virtual SAN solution.
■ All-flash is available in Virtual SAN 6.0 only
■ It requires a 10Gb Ethernet network; it is not supported with 1Gb NICs.
■ The maximum number of all-flash hosts is 64
■ Flash devices are used for both cache and capacity
■ Flash read cache reservation is not used with all-flash configurations
■ There is a need to mark a flash device so it can be used for capacity – this is covered in the VMware® Virtual SAN™ 6.0 Administrators Guide.
■ Endurance now becomes an important consideration both for cache and capacity layers
Download the full Virtual SAN 6.0 Proof of Concept Guide.
VMware® Virtual SAN™ is the industry’s first scale-out, hypervisor-converged storage solution based on the industry-leading VMware vSphere® solution. Virtual SAN is a software-defined storage solution that enables great flexibility and vendor choice in hardware platform.
This document provides guidance regarding hardware decisions—based on creating Virtual SAN solutions using VMware Compatibility Guide–certified hardware—when designing and deploying Virtual SAN. These decisions include the selection of server form factor, SSD, HDD, storage controller, and networking components.
This document does not supersede the official hardware compatibility information found in the VMware Compatibility Guide, which is the single source for up-to-date Virtual SAN hardware-compatibility information and must be used for a list of officially supported hardware.
When designing a Virtual SAN cluster from a sum of VMware Compatibility Guide–certified vendor components, this guide should be used in combination with the VMware® Virtual SAN™ Design and Sizing Guide, the Virtual SAN sizing tool, and other official Virtual SAN documentation from VMware Technical Marketing and VMware Technical Publications.
VMware Virtual SAN is a groundbreaking storage solution that enables unprecedented hardware configuration flexibility through building an individual solution based on preferred server vendor components. The guidance provided in this document enables users to make the best choice regarding their particular storage needs for their software-defined datacenter based on VMware vSphere. When selecting hardware components for a Virtual SAN solution, users should always utilize the VMware Compatibility Guide as the definitive resource tool.
Download the full VMware® Virtual SAN™ Hardware Guidance technical white paper.
The purpose of this document is to provide sample server configurations as directional guidelines for use with VMware® Virtual SAN™. Use these guidelines as your first step toward determining the configuration for Virtual SAN.
How to use this document
1. Determine your workload profile requirement for your use case.
2. Refer to Ready Node profiles to determine the approximate configuration that meets your needs.
3. Use the VSAN Hardware Compatibility Guide to pick a Ready Node aligned with the selected profile from the OEM server vendor of your choice.
For more detail on Virtual SAN Design guidance, see
1. Virtual SAN Ready Node Configurator
2. Virtual SAN Hardware Guidance
3. VMware® Virtual SAN™ 6.0 Design and Sizing Guide.
4. Virtual SAN Sizing Calculator
5. VSAN Assessment Tool
Download the full Virtual SAN Hardware Quick Reference Guide technical white paper.