If you look beyond the traditional monolithic storage arrays that dominated the storage market for decades, there are some very interesting things going on.

One of the most popular movements is that of Software Defined Storage (SDS), and in particular, Hyperconverged Storage.

I've worked with SDS for 6+ years, but over the last 3 years or so, I've existed almost entirely in a hyperconverged storage world. I've run PoCs of more hyperconverged storage platforms than I care to admit, and deployed hyperconverged storage on large production vSphere and Hyper-V clusters. There have been some great successes with this approach, but also some spectacular failures. (Many future blog posts' worth.)

I have no doubt that a software approach to storage is a good thing, and the wider acceptance of this approach, combined with the ever-decreasing price of flash, has made the storage world a very interesting one right now.

However, whilst the genuine benefits of hyperconvergence are easily sold, there are often significant trade-offs that are glossed over. (This is what happens when marketing and technology mix - see ultra-thin TVs.)

So let's look at some of these trade-offs. (Mainly from a server virtualisation perspective)

Unplanned Outage Severity

The biggest and most obvious problem, and one that you will be forced to live with.

In a hyperconverged environment, if you unexpectedly lose a host (which will inevitably happen from time to time), you've also lost a storage node (and vice versa). Whilst both your virtualisation and storage layers will have resiliency built into them to withstand these failures, for obvious reasons this is still less desirable than if the same failure were to happen in a non-hyperconverged environment.

Planned Outage Severity

You will at some point want to patch or upgrade hosts.

Whilst you can easily move the running VMs to another host, the storage situation is more complicated.

The worst-case scenario here, and one that exists on some platforms, is that there is no way to pre-warn the storage system about what you're about to do, so the reboot is treated like an unplanned outage of a storage node.

If you're holding more than 2 copies of data, this is something you might be prepared to live with, but if you're only holding 2 copies, then you are going to be holding your breath until the node is back up and/or the data is rebuilt.

The best-case scenario is that the solution has some kind of maintenance mode, although exactly what that maintenance mode does will vary from vendor to vendor.
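As a rough illustration of what that pre-warning needs to achieve, here's a minimal sketch of a storage-aware maintenance flow for a single node. It's written in Python purely for readability, and every function in it is a hypothetical placeholder; real platforms expose these steps through their own APIs, CLIs or PowerShell modules, and the semantics (full data evacuation versus simply pausing rebuilds) differ between vendors.

```python
import time

# All of the functions below are hypothetical placeholders. Every
# hyperconverged platform exposes these concepts differently (REST API,
# PowerShell module, esxcli, and so on) - swap in the real calls.

def check_storage_healthy() -> bool:
    """Are all data copies present and in sync?"""
    return True  # placeholder

def enter_storage_maintenance(host: str) -> None:
    """Pre-warn the storage layer that this node is going away."""
    print(f"{host}: entering storage maintenance mode")

def exit_storage_maintenance(host: str) -> None:
    print(f"{host}: leaving storage maintenance mode")

def reboot_host(host: str) -> None:
    print(f"{host}: patching and rebooting")

def wait_for_resync(timeout_seconds: int = 3600) -> None:
    """Block until all data copies are rebuilt (placeholder)."""
    time.sleep(0)

def patch_host(host: str) -> None:
    """Take one hyperconverged node through a planned reboot safely."""
    # 1. Never start while the storage layer is already degraded.
    if not check_storage_healthy():
        raise RuntimeError("Storage cluster degraded; aborting maintenance")

    # 2. Pre-warn the storage layer so the reboot is NOT treated as an
    #    unplanned node failure. Depending on the vendor, this may evacuate
    #    data from the node, or simply pause rebuilds for the outage window.
    enter_storage_maintenance(host)
    try:
        # 3. Patch and reboot the host (details omitted).
        reboot_host(host)
    finally:
        # 4. Bring the node back into the storage cluster and wait until
        #    all data copies are rebuilt before calling the job done.
        exit_storage_maintenance(host)
        wait_for_resync()

patch_host("hci-node-01")
```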

Planned Outage Complexity

Like I said, you will at some point want to patch your hosts. Most likely, you will want to automate patching them.

For starters, applying rolling updates to a group of hosts in a hyperconverged environment will inevitably take longer, since you will need to satisfy storage availability constraints throughout the process: typically only one node down at a time, with a wait for data to resynchronise before moving on to the next host.

It may also be more difficult to agree a suitable maintenance window, since your rolling updates are now going to affect the storage as well as the compute.

And depending on your solution, automating updates could be completely out of the question.

If you're running VMware VSAN, then VMware Update Manager is VSAN-aware, which means you should be able to safely and easily automate updating your hosts.

But some solutions just don't have this level of integration, so automating updates becomes more complex.

It might be that you can script it, although this brings in an increased likelihood of human error becoming a factor.
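If you do end up scripting it yourself, the loop usually looks something like the sketch below: strictly one node at a time, building on the per-host maintenance flow sketched earlier, with a wait for the storage layer to be fully healthy again before the next host is touched. As before, this is illustrative only; the function bodies are placeholders for whatever your hypervisor and storage platform actually provide.

```python
import time

# Hypothetical placeholders for hypervisor / storage platform calls;
# substitute whatever your stack actually provides (PowerCLI, failover
# cluster cmdlets, a vendor REST API, etc.).

def list_hosts(cluster: str) -> list[str]:
    return [f"{cluster}-node-{i:02d}" for i in range(1, 5)]  # placeholder

def move_vms_off(host: str) -> None:
    print(f"{host}: evacuating VMs (vMotion / Live Migration)")

def apply_patches_and_reboot(host: str) -> None:
    print(f"{host}: applying patches and rebooting")

def storage_fully_resynced(cluster: str) -> bool:
    return True  # placeholder: are all replicas rebuilt and in sync?

def rolling_update(cluster: str) -> None:
    """Patch a hyperconverged cluster strictly one node at a time."""
    for host in list_hosts(cluster):
        # Compute side: evacuate the running VMs first.
        move_vms_off(host)

        # The node now disappears from both the compute AND storage layers.
        apply_patches_and_reboot(host)

        # Storage side: the step a non-hyperconverged rolling update
        # doesn't need. Don't touch the next host until every replica is
        # rebuilt, otherwise a second node going down could take data offline.
        while not storage_fully_resynced(cluster):
            time.sleep(60)

rolling_update("prod-hci")
```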

Worst case, you may have to go back to manually applying patches. If you only have a few hosts, this may not be such a pain point, but on a larger cluster, this is going to cause a headache for somebody.

VM Contagion

In a hyperconverged environment, your storage platform and your VMs are sharing compute.

If you're running a computationally inexpensive storage solution like ScaleIO, then this is unlikely to be a problem, even in heavily virtualised environments.

However, if you're running a storage solution that performs tasks such as inline deduplication and compression (which many products now do), then you are at risk.

Dedupe and compression are CPU-intensive tasks, and sharing your CPU time with VMs will inevitably increase the time it takes to complete these calculations, adding latency to the storage system. (I have seen this problem with more than one hyperconverged storage solution.)
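To make the intuition concrete, here's a crude back-of-the-envelope sketch. The numbers are entirely made up (the 0.2 ms per-write CPU cost is an assumption, not a measurement from any product); the point is simply that if the storage stack has to share cores with busy VMs, the wall-clock time of each inline dedupe/compress pass stretches roughly in proportion to how little CPU it gets.

```python
# Purely illustrative back-of-the-envelope numbers, not measurements from
# any particular product. Assumption: each write needs a fixed amount of
# CPU time for inline dedupe + compression, and that work stretches out
# as the VMs claim a bigger share of the same cores.

CPU_MS_PER_WRITE = 0.2  # assumed CPU cost of dedupe + compression per write (ms)

for vm_cpu_share in (0.0, 0.5, 0.7, 0.9):
    storage_share = 1.0 - vm_cpu_share        # CPU left for the storage stack
    added_latency_ms = CPU_MS_PER_WRITE / storage_share
    print(f"VMs using {vm_cpu_share:.0%} of the CPU -> "
          f"~{added_latency_ms:.2f} ms extra write latency")
```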

The most interesting approach to this problem I have seen comes from SimpliVity, who use FPGA adapters in their hyperconverged storage nodes and offload the dedupe and compression calculations to the FPGA.

Human Error

Whilst the likelihood of human error becoming a factor is no greater with hyperconverged architectures than with any other, the potential impact of human error in a hyperconverged environment is undoubtedly greater.

Consider the following worst-case scenario in a Hyper-V based hyperconverged environment.

  • An administrator makes a seemingly innocuous change to group policy with respect to Windows updates.
  • This change inadvertently sets Windows updates to install on every node in the cluster simultaneously, resulting in multiple cluster nodes restarting in the same time window.
  • The storage cluster goes down as a result of there being more nodes down than the cluster is designed to withstand.
  • Post-update, several of the nodes refuse to boot, meaning the storage cluster remains down.
  • You are unable to fix the nodes.
  • You are forced to rebuild the entire environment from scratch, and restore everything from backup.

I've actually seen a scenario very similar to this play out on a production cluster running hundreds of VMs, and it was not pleasant. In that case, the nodes were fixed by manually uninstalling the troublesome updates using the Windows Recovery Console, so luckily there was no need to rebuild the environment and restore all the VMs from backup.

Not everyone will be this lucky.