VMware’s Virtual SAN radically simplifies storage in vSphere clusters. Even though the product has been available for about a year, we still continue to get all sorts of interesting questions.
What follows is a list of the questions we encounter most often, along with quick answers.
I hope you find this helpful!
1. How does VSAN use external storage arrays?
VSAN doesn’t use external storage arrays — it uses server-based disk drives and flash devices to create its own shared and protected storage pool. The environment is much simpler and more cost-effective as a result.
However, a given cluster can use both VSAN and external storage arrays at the same time — and there are useful capabilities to place workloads intelligently, move them as needed, etc.
2. What storage protocols does VSAN support, e.g. iSCSI, NFS, etc.?
As VSAN only communicates with vSphere virtual machines, there’s really no need for a standard storage protocol. VSAN uses a proprietary protocol within the cluster that’s more efficient than the familiar choices.
If you might want to expose VSAN storage outside of the vSphere cluster, there are some good third-party choices for that.
3. Can VSAN support any server, disk, flash, IO controller, etc.?
There’s a long and constantly-growing list of supported components, but — no — not *everything* you might encounter in the wild. It’s vitally important to use only listed components, drivers and firmware — especially if you don’t want nasty surprises.
That's why we strongly recommend that customers use VSAN ReadyNodes: either purchase them as configured, or use them as a baseline for your own design.
4. All-flash VSAN looks interesting — how do you do read caching?
Only the hybrid version of VSAN (mixed disks and flash) does read caching. The all-flash version doesn’t need to do read caching, as flash drives are plenty fast enough on their own.
For the all-flash version, a small amount of write cache is used to greatly lessen the wear on less-expensive capacity-oriented flash devices. In each case, the general sizing recommendation is the same: 10% of consumed capacity.
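To make that rule of thumb concrete, here's a minimal sizing sketch. The 10% ratio comes from the text above; the capacity figures and the helper name are example assumptions, not recommendations for any specific environment.

```python
# Rough cache-tier sizing sketch using the general "10% of consumed
# capacity" guideline mentioned above. Inputs are example values only.

def recommended_cache_gb(consumed_capacity_gb, cache_ratio=0.10):
    """Return the suggested cache-tier size for a given consumed capacity."""
    return consumed_capacity_gb * cache_ratio

# Example: 8 TB of consumed (not raw) capacity
consumed_gb = 8 * 1024                      # 8192 GB consumed
print(recommended_cache_gb(consumed_gb))    # ~819.2 GB of cache-tier flash
```

Note the guideline keys off *consumed* capacity, not raw capacity, so the cache tier can stay modest even in large clusters.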
5. What makes VSAN so fast?
A number of things, really. First, VSAN is fully integrated into the vSphere kernel. That means optimized IO paths and less resource usage. Second, there’s no need for IOs to traverse the typical storage network and array controller to get work done — everything is integral to the server cluster.
And, finally, there’s some pretty smart storage software at work behind the scenes.
6. How does VSAN protect my data?
In short: VSAN mirrors each VM's objects across multiple hosts, governed by a per-VM "number of failures to tolerate" policy, so a failed disk or host never leaves you with a single copy of your data. There's much more detail available, of course, but that's the core concept.
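As a rough sketch of how that policy translates into physical requirements — the formulas follow the commonly documented VSAN rules, and the helper name is illustrative:

```python
# For a "number of failures to tolerate" (FTT) of n, VSAN mirrors each
# object into n + 1 replicas and needs at least 2n + 1 hosts (or fault
# domains), so witness components can break ties after a failure.

def vsan_protection_requirements(failures_to_tolerate):
    """Return (replica_count, minimum_hosts) for a given FTT policy value."""
    n = failures_to_tolerate
    return n + 1, 2 * n + 1

for ftt in (1, 2, 3):
    replicas, min_hosts = vsan_protection_requirements(ftt)
    print(f"FTT={ftt}: {replicas} replicas, {min_hosts} hosts minimum")
```

The default policy of FTT=1 is why a three-host cluster is the practical minimum.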
7. I ran a single VM against a single VMDK, and didn’t find performance all that impressive. What’s going on?
That’s not a surprise, really. VSAN is designed to be a shared storage service for an entire cluster, and not individual VMs. Many storage arrays will spread a given VMDK around many disks using striping or their file system, which gives great performance to a single VM, but loses effectiveness when there are many VMs all competing for the same capacity devices.
You can achieve a similar effect with VSAN by setting the striping policy to a higher number. This will be effective for smaller numbers of VMs, but loses effectiveness if the entire cluster is busy. However, we’d encourage you to test what you actually plan to use — which most often is multiple VMs on a cluster, each doing different things.
8. My storage team is interested in managing VSAN — what tools do you have for them?
That's one of its unique strengths: no specialized storage skills required.
However, in some situations, there’s a preference to have the storage team continue to do this job. Today, they would have to use the same tools as the vSphere administration team. A few customers have decided to go this way, and it seems to be working for them.
9. Why don’t you have deduplication and compression?
Today (May 29, 2015), VSAN doesn't support those features. That being said, they are no panacea. We've worked the TCO numbers, and unless your cluster is getting an outrageous level of deduplication or compression (e.g. >80%), VSAN will usually be more cost-effective, not to mention faster and easier to manage. Remember, VSAN components are priced as server storage, not array storage — so there's a big differential.
If part of your environment could really benefit from dedupe and/or compression, there's no reason why that data couldn't live on a device that provides those features today, right alongside VSAN.
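The break-even claim above can be sketched with back-of-the-envelope arithmetic. Both $/GB figures below are made-up placeholders chosen only to illustrate the shape of the comparison, not real pricing for any product.

```python
# Back-of-the-envelope break-even sketch for the TCO argument above.
# Both $/GB figures are hypothetical placeholders, NOT real pricing.

def effective_cost_per_gb(list_price_per_gb, data_reduction):
    """Cost per usable GB after dedupe/compression savings (0.0 - 1.0)."""
    return list_price_per_gb * (1.0 - data_reduction)

vsan_server_storage = 0.50   # hypothetical $/GB for server-side disks/flash
array_storage = 2.50         # hypothetical $/GB for external array capacity

# How much data reduction must the array achieve just to match the
# server-storage price per usable GB?
break_even = 1.0 - vsan_server_storage / array_storage
print(f"break-even data reduction: {break_even:.0%}")
```

Under these example prices the array needs 80% reduction just to reach parity, which is why only outrageous dedupe ratios change the outcome.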
10. What backup products do you support?
There’s a long list — basically anything that supports standard VADP as a backup interface. Ask your favorite vendors.
11. What remote replication products do you support?
As VSAN doesn’t use external arrays, it doesn’t support array-based replication — all replication has to be done at the host and at an individual VM level.
In addition to VMware’s own vSphere Replication, there are also options from EMC (RecoverPoint for VMs) as well as Zerto.
12. Where are the “gotchas” for VSAN?
Generally speaking, most customer experiences have been very positive, but there are two areas worth calling out.
The first is the HCL, especially drivers and firmware. A bad IO controller driver can really ruin your day. We’ve published a list of what we’ve tested and we know works reliably, so stick to that — not only on day 1, but through the entire life of your cluster.
The second is sizing. Some VMware admins try to design their own configs, and are rather new to storage. They might not understand the performance implications of using, say, a single 4TB 7200 RPM NL-SAS drive vs. four 1TB 10K SAS drives — or might try to use super low-end components.
The vast majority of people are better off with pre-configured VSAN ReadyNodes for different workload profiles that you can either buy as a SKU, or use as a starting point for your own configuration. We also have a thorough design and sizing guide if you’d like to understand the theory behind the recommendations.
13. You say that VSAN is ready for tier 1 enterprise workloads — why?
That claim is the result of a year of real-world customer experience coupled with our own internal testing. VSAN is clearly a rock-solid product that delivers more than enough performance, reliability and availability to do the job. We have plenty of customers who are running very demanding production workloads on VSAN, and are very pleased with the results.
Additionally, we’ve got a few reference architectures for popular enterprise apps published today, and there’s more coming soon.
14. We’re looking at a VDI environment, and VSAN keeps coming up. What’s the win?
That’s no surprise — Horizon View and VSAN were made to work with each other.
VSAN helps you avoid an external array. All at once, you've got an environment that's simpler, less expensive and easier to manage. That's a big win, right there.
When VSAN was being designed, VDI workloads were studied carefully and incorporated into the architecture from both a performance and manageability viewpoint.
Generally speaking, hybrid VSAN configs can approach the performance of entry-level all-flash arrays, and all-flash VSAN configs easily go toe-to-toe with higher-end all-flash arrays.
However, for many customers, manageability is the big win. As the VDI admin creates user pools and assigns policies for performance, protection and persistence — all of that just flows downwards to VSAN via a shared policy mechanism. That means that the admin can easily reconfigure their VDI environment without having to explicitly reconfigure an external storage array. We hear that’s a pretty cool feature.
15. I’m not into designing my own VSAN-based cluster — it looks complicated. What are my options?
If you'd like something with an extremely simple, out-of-the-box experience, take a look at EVO:RAIL — available from almost all the larger hardware vendors — arguably the simplest virtual infrastructure option on the market today.
If you’ve got some basic vSphere skills or want to do a bit of customization, take a look at the 40+ VSAN ReadyNodes that are designed for specific workload profiles, again available directly from our hardware partners. There’s a bit more work (e.g. installing drivers, etc.) but nothing too difficult.
By the way, there is a nifty sizing and TCO calculator here.
16. My storage team has concerns about VSAN — what should I say?
Well, the goal of VSAN was to make storage essentially “disappear” from the perspective of the vSphere administrator: very simple, no special skills required.
The industry has been using the external storage array model for over twenty years, and — by comparison — VSAN doesn’t look like an external storage array, so there’s that. And storage people can be very conservative by nature.
On the other hand, vSphere admins are pretty adamant about the need for change. They point out it’s wasteful and inefficient to have to go to the storage team for each and every thing they need. Why not let the vSphere admin do storage?
The debate ends up being around two things:
- Are the benefits worth introducing a new technology? Even a simple VSAN TCO analysis will open a lot of eyes — both capex and opex.
- Where does it best fit? — identifying the parts of the environment where it makes sense to continue with a traditional external storage model, and parts of the environment where it makes sense to collapse storage into the hypervisor with VSAN.
The real win for the storage team is that they now have more time to go work on things that require their specialized expertise vs. day-to-day routine operations.
17. We’re looking at VSAN as well as Nutanix. What would you emphasize?
First — performance. All of our head-to-head testing (as well as customer testimonials) shows that VSAN has a stunning performance advantage on identical hardware with the most demanding datacenter workloads, and uses less memory and CPU as well — so better consolidation ratios.
Second — operational simplicity. With VSAN, everything is managed through vCenter and a single interface designed to be used and supported as a whole. The administrative workflows are far simpler and more obvious as a result — no need to go back and forth as you work with two products and two vendors. People who have used both generally agree with this observation.
Third — feature support. For example, vSphere’s DRS is a popular feature that rebalances cluster workloads. As Nutanix depends on data locality, that can create adverse performance effects as VMs are moved and their data attempts to follow them around. Maintenance mode is another example — the evacuation and reprotection of data is automated with VSAN, but a manual set of steps with Nutanix.
Fourth — cost. Everyone looks at different pricing, but — generally speaking — an environment with vSphere and VSAN will cost less (hardware and software) than an environment with vSphere and Nutanix.
Fifth — choice. If you already have a preferred server vendor, or are looking for a tailored configuration, VSAN gives you a wide world of hardware choices.
18. How fast can VSAN go? How big can it get? Does it show linear scalability?
VSAN is software — scale and performance is mostly a function of the hardware you bring: CPU, memory, network, flash, controllers, etc. Every time the hardware gets faster, VSAN gets faster as a result — and there’s plenty of cool new hardware always coming to market.
As far as maximum size, the math is easy: up to 64 servers in a cluster, and each server supports up to 35 capacity devices (five disk groups, seven capacity devices each). A bit of quick math yields a max of 2240 capacity devices. Using 4TB drives, that’s just shy of 9PB raw in a single cluster. Probably more than you need.
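The quick math above can be spelled out explicitly; the host, disk-group, and device counts are the limits stated in the answer, and the 4TB drive size is the example it uses:

```python
# Maximum-capacity arithmetic for a single VSAN cluster, as described above.
hosts_per_cluster = 64
disk_groups_per_host = 5
capacity_devices_per_group = 7
drive_tb = 4                    # example 4 TB capacity drives

devices_per_host = disk_groups_per_host * capacity_devices_per_group  # 35
total_devices = hosts_per_cluster * devices_per_host                  # 2240
raw_tb = total_devices * drive_tb                                     # 8960 TB

print(total_devices, raw_tb, raw_tb / 1000)  # 2240 devices, just shy of 9 PB
```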
We’ve published multiple tests that show linear scalability as you add more nodes. Performance also scales as you put more devices into each server: disk groups, flash cache and capacity devices — scale up as well as scale out.
19. Can the cluster network be a limitation to performance?
Not really. Although we support 1Gb, 10Gb is highly recommended. Our internal testing shows that you need to get into nosebleed multi-million IOPS territory before network overhead even starts to become a factor. For those folks (and they are out there!), there’s 40Gb. We also have a deep-dive network design guide if you’re doing a very large multi-rack cluster.
20. What about blade servers?
Blade servers were designed for external storage, so they don’t have a lot of internal capacity for storage devices. VMware has qualified a few SAS-connected external storage enclosures, with more coming.
However, some of our more adventurous customers are experimenting with MCS (memory channel storage) which is essentially flash storage right on the motherboard. They are seeing great densities as well as great performance.
As this technology matures, blade server designs should get more interesting with hypervisor-converged storage solutions like VSAN.
21. I want to share VSAN storage across multiple clusters — how do I do this?
Sorry, that’s not how it was designed to be used. We found that most customers think in terms of designing individual clusters, and VSAN respects that design boundary. If you want to have a large storage pool shared across multiple clusters, you’re back to a dedicated shared storage model — using storage specialists — and a lot of the operational benefit disappears as a result.
However, as mentioned above, there are third party products that can expose a VSAN cluster’s capacity to other entities via NFS or iSCSI.
22. How do I protect against rack failures?
VSAN 6.0 introduced a new feature — fault domains — that ensures placement of protection components across separate racks. It’s pretty easy to use, and it’s very effective.
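To illustrate the idea behind fault domains, here's a deliberately simplified toy model — the host names, rack layout, and placement function are invented for the example and are not the actual VSAN placement algorithm:

```python
# Toy illustration of the fault-domain idea: replicas of an object must
# land on hosts in *different* racks (fault domains), so losing a whole
# rack never takes out every copy. Simplified model, not real VSAN logic.
import itertools

hosts = {                      # host -> rack (fault domain), example layout
    "esx01": "rackA", "esx02": "rackA",
    "esx03": "rackB", "esx04": "rackB",
    "esx05": "rackC", "esx06": "rackC",
}

def valid_placements(replica_count):
    """Yield host combinations whose racks are all distinct."""
    for combo in itertools.combinations(hosts, replica_count):
        racks = {hosts[h] for h in combo}
        if len(racks) == replica_count:
            yield combo

# With 2 replicas across 3 racks, every valid placement spans two racks.
placements = list(valid_placements(2))
print(len(placements))   # 12 valid host pairs out of 15 total
```

Without fault domains, the three same-rack pairs would also be eligible, and a rack failure could take both copies of an object offline at once.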
23. What is this Health Check thing I’m hearing about?
It’s a new feature in VSAN 6.0, and makes it much easier for a vSphere administrator to quickly ascertain that the VSAN environment is healthy: storage devices, networks, resources, driver and firmware versions, etc. — and if there’s a problem found, what to do about it.
Every VSAN 6.0 customer should be using it.
24. Your roadmap seems to be moving pretty fast — how can I get a view into future releases?
Yes, the roadmap is moving pretty fast — and that's the direct result of our great engineers! The standard VMware process is to request a roadmap briefing through your sales team, who can arrange one with a product manager or similar under NDA.
This new reference architecture is based on the latest versions of VMware Horizon View 6.0.2 and VMware Virtual SAN 6.0. Virtual SAN 6.0's hybrid storage architecture is the focus of the storage design and configuration, with real-world test scenarios, user workloads, and infrastructure system configurations.
The hardware utilized in this reference architecture is based on Extreme Ethernet switches and SuperMicro rack mount servers with locally attached storage devices designed to support a scalable architecture and a cost-effective linked-clone desktop deployment model on VMware vSphere 6.0.
This technical paper highlights the results collected from the extensive user experience and operations testing performed, including Login VSI and desktop performance testing of up to 1,600 desktops, and desktop provisioning operations of up to 2,400 desktops.
The tests reveal the world-class performance and value of the solution. Virtual SAN's technology allows easy scalability while maintaining superior performance at a competitive price point. The official document will be publicly available soon from the VMware technical resources page.
In the meantime, you can get early access to the final draft of the white paper directly from the link below.
The paper can be found here: Horizon View 6.0.2 VMware Virtual SAN 6.0 Hybrid Reference Architecture.
A technical white paper about Virtual SAN performance has been published. This paper provides guidelines on how to get the best performance for applications deployed on a Virtual SAN cluster.
In addition to the application workloads covered in the paper, we studied Virtual SAN caching tier designs and the effect of Virtual SAN configuration parameters on the Virtual SAN test bed.
Virtual SAN 6.0 can be configured in two ways: Hybrid and All-Flash. Hybrid uses a combination of hard disks (HDDs) to provide storage and a flash tier (SSDs) to provide caching. The All-Flash solution uses all SSDs for storage and caching.
Tests show that the Hybrid Virtual SAN cluster performs extremely well when the working set is fully cached for random access workloads, and also for all sequential access workloads. The All-Flash Virtual SAN cluster, which performs well for random access workloads with large working sets, may be deployed in cases where the working set is too large to fit in a cache. All workloads scale linearly in both types of Virtual SAN clusters—as more hosts and more disk groups per host are added, Virtual SAN sees a corresponding increase in its ability to handle larger workloads. Virtual SAN offers an excellent way to scale up the cluster as performance requirements increase.
Virtual SAN 6.0 introduced changes to the structural components of its architecture, including a new on-disk format that delivers better performance and capability enhancements. Among those new capabilities, vSphere admins can now perform in-place rolling upgrades from Virtual SAN 5.5 to Virtual SAN 6.0 without introducing any application downtime.
Upgrading an existing Virtual SAN 5.5 cluster to Virtual SAN 6.0 is performed in multiple phases, and it requires the reformatting of all of the magnetic disks that are being used in the Virtual SAN cluster. The upgrade is a one-time procedure that is performed from the RVC command-line utility with a single command.
Upgrade Phase I: vSphere Infrastructure Upgrade
In this phase, all components are upgraded to the vSphere 6.0 release. All vCenter Servers, ESXi hosts, and related infrastructure components need to be upgraded to their respective 6.0 software releases. Any of the vSphere-supported upgrade procedures for the individual components may be used.
Upgrade Phase II: Virtual SAN 6.0 Disk Format Conversion (DFC)
This phase is where the previous on-disk format (VMFS-L) is replaced on all of the magnetic disk devices with the new on-disk format (VSAN FS). The disk format conversion procedure reformats the disk groups and upgrades all of the objects to the new version 2. Virtual SAN 6.0 provides support for both the previous on-disk format of Virtual SAN 5.5 (VMFS-L) and its new native on-disk format (VSAN FS).
While both on-disk formats are supported, it is highly recommended to upgrade the Virtual SAN cluster to the new on-disk format in order to take advantage of the performance and newly available features. The disk format conversion is performed sequentially in a Virtual SAN cluster, upgrading one disk group per host at a time. The workflow illustrated below is repeated for all disk groups on each host before the process moves on to another host that is a member of the cluster.
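The rolling order described above can be modeled in a few lines. This is an illustration only — the hostnames and disk-group names are invented, and the real conversion is driven by the RVC utility as noted earlier, not by code like this:

```python
# Illustrative-only model of the rolling disk-format-conversion order:
# one disk group per host at a time, finishing every group on a host
# before moving on to the next cluster member.

def dfc_order(cluster):
    """cluster: {host_name: [disk_group_names]} -> conversion sequence."""
    sequence = []
    for host, disk_groups in cluster.items():   # one host at a time
        for group in disk_groups:               # one disk group at a time
            sequence.append((host, group))      # evacuate, reformat, resync
    return sequence

cluster = {"esx01": ["dg1", "dg2"], "esx02": ["dg1"]}
print(dfc_order(cluster))
# [('esx01', 'dg1'), ('esx01', 'dg2'), ('esx02', 'dg1')]
```

Because only one disk group is out of service at any moment, the rest of the cluster keeps serving data and objects are resynced as each group comes back, which is what makes the upgrade non-disruptive.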