Thursday, 1 March 2012

Buttery biscuit base - part I

As published recently (sadly print only, not online) in ComputerScope magazine, Ireland's leading IT magazine. I can only lay official claim to part I, a colleague did part II and I nicked it. But I defy anyone else to get the phrase "buttery biscuit base" into an article on storage virtualisation...

What's stalling virtualisation?

With only around 17-20% of servers worldwide having been virtualised and the hosted desktop model still spluttering into gear, the apparently indisputable benefits of virtualisation don’t appear to be going mainstream. What are the reasons for this, and how can they be addressed?

Depending on which major market analyst firm you listen to, only around one fifth of servers worldwide have been virtualised with a hypervisor. Whilst other reports do paint rosier pictures, widespread adoption, at least in live environments, would still seem to be some way off. Perhaps this is good news for the virtualisation vendors, or at least their shareholders. However, to view it from a different perspective, if virtualisation is not actually being deployed as extensively as we presumed it would be, are the supposedly undeniable benefits preached by those vendors really as rock solid as they would have you believe?

The honest answer is probably yes. Those organisations which have deployed virtualisation in whatever guise are unlikely to want to go back to the way we used to do things.

Virtualisation has revolutionised organisations’ IT infrastructures in many ways. It has allowed them to re-utilise and get more out of existing hardware and to reduce costs in a variety of areas. Perhaps most importantly, it has also allowed them to take a much more proactive approach to end-user IT delivery, and therefore productivity.

Although virtualisation is not a new technology, VMware and others have successfully re-invented the wheel. VMware sees virtualisation as a three-stage process:

  1. Bringing production systems onto virtualised servers
  2. Tackling the priority one applications like Oracle and Exchange
  3. Delivering an entire IT infrastructure on demand or “as a service”.

The Sticking Points

Server virtualisation, however, has brought with it some unforeseen sticking points. The proliferation of virtual machines, spun up for certain tasks or test and development activities and then never touched again, can have a major impact. It can affect not only the resources of the underlying hardware but also software asset management and security best practices.

So whilst the three-stage journey mentioned above is a realistic and achievable aim, there are other challenges that key stakeholders are becoming aware of. Servers may well be the lynchpin of a company’s IT world, but to ignore the impact of server virtualisation on other areas would be to cook your lemon cream cheesecake without its buttery biscuit base. Two of these challenges are often the ultimate, decisive factors in the success or failure of a virtualisation project: the management and performance of storage.

Storage and I/O performance are key to the success of any virtualisation project, whether it’s server, application or desktop - particularly the last of these. Many applications that, before the advent of virtualisation, enjoyed the luxury of direct-attached storage are now having that comfort blanket ripped away and are being asked to do the same as they always have, if not more, using shared storage instead.

And many don’t like it. The introduction of just a millisecond of latency is enough to send them into an almighty tailspin. Multi-tenancy also brings considerable challenges with regard to resilience. One big advantage of single-purpose servers was that, when something went wrong, the failure didn’t necessarily have an impact on other resources; with many workloads now sharing the same storage, a single fault can affect them all, so the resilience and continuity of the SAN infrastructure take on huge importance. Add into the mix the demands of cloud-based applications and virtual desktops, and we can begin to pick out the potential Achilles heel in all of this.

One answer to these demands is the concept of a storage hypervisor. A major enabling factor of any virtualisation product is the de-coupling of software from hardware. Ask VMware what vSphere runs on these days and they will probably give you a slightly quizzical look. Essentially, it doesn’t really matter what the badge on the front of the server says; as long as the chipset is virtualisation-aware (and pretty much all are nowadays), they don’t care who made it. You can mix and match your server manufacturers to suit requirements in exactly the same way as Declan Kidney would pick his rugby team.

So should the same not apply to storage? Are the good old days of antiquated single-vendor storage silos numbered? Well, the trend towards hardware agnosticism is certainly becoming more and more apparent as priorities become performance-based. Just as Kidney would not put Cian Healy at fly half or Tommy Bowe at number eight, so must IT managers be given the flexibility to choose what goes where, not be dictated to by their storage manufacturer.

Buttery biscuit base - part II

One of the magical things about virtualisation is that it’s really a sort of invisibility cloak. Each virtualisation layer hides the details of those beneath it. The result is much more efficient access to lower level resources. Applications don’t need to know about CPU, memory, and other server details to enjoy access to the resources they need.

Unfortunately, this invisibility tends to get a bit patchy when you move down into the storage infrastructure underneath all those virtualised servers, especially when considering performance management. In theory, storage virtualisation ought to be able to hide the details of the media, protocols and paths involved in managing the performance of a virtualised storage infrastructure. In reality, the machinery still tends to clatter away in plain sight.

The problem is not storage virtualisation per se, which can boost storage performance in a number of ways. The problem is a balkanised storage infrastructure, where virtualisation is supplied by hardware controllers associated with each “chunk” of storage (e.g. an array). This means that the top storage virtualisation layer is human: the hard-pressed IT personnel who have to make it all work together.

Many IT departments accept the devil’s bargain of vendor lock-in to try to avoid this. However, even if you do commit your storage fortunes to a single vendor, the pace of innovation guarantees the presence of end-of-life devices that don’t support the latest performance management features. Also, the expense of this approach puts it beyond the reach of most companies, who can’t afford a forklift upgrade to a single-vendor storage infrastructure. They have to deal with the real-world mix of storage devices that results from keeping up with innovation and competitive pressures.

It’s for these reasons that we are seeing many companies turn to storage hypervisors, which, like server hypervisors, are not tied to a particular vendor’s hardware. A storage hypervisor throws the invisibility cloak over the details of all of your storage assets, from the latest high-performance SAN to SATA disks that have been orphaned by the consolidation of a virtualisation initiative. Instead of trying to match a bunch of disparate storage devices to the needs of different applications, you can combine devices with similar performance into easily provisioned and managed virtual storage pools that hide all the unnecessary details. And, since you’re not tied to a single vendor, you can look for the best deals in storage and keep using older storage for longer.
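
To make the pooling idea a little more concrete, here is a minimal Python sketch. The device names, latency bands and pool labels are entirely hypothetical, chosen for illustration rather than taken from any vendor's product; the point is simply that devices are grouped by performance, not by manufacturer:

  from dataclasses import dataclass

  @dataclass
  class Device:
      name: str              # e.g. an array LUN or an orphaned SATA disk
      capacity_gb: int
      avg_latency_ms: float

  def build_pools(devices):
      """Group devices of broadly similar performance into named virtual pools."""
      bands = [(0.0, 2.0, "fast"), (2.0, 10.0, "standard"), (10.0, float("inf"), "capacity")]
      pools = {label: [] for _, _, label in bands}
      for dev in devices:
          for low, high, label in bands:
              if low <= dev.avg_latency_ms < high:
                  pools[label].append(dev.name)
                  break
      return pools

  # Mixed-vendor devices end up pooled by performance, not by the badge on the front.
  devices = [Device("vendorA-ssd-lun1", 400, 0.5),
             Device("vendorB-fc-lun7", 2000, 5.0),
             Device("orphaned-sata-3", 1000, 12.0)]
  print(build_pools(devices))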

As touched on earlier, storage performance is often the single most critical aspect of any virtualisation project. Or, to turn that on its head, it’s often the reason that rollout pilots fail. Desktop virtualisation is a very particular case in point. The sheer amount (and therefore expense) of storage hardware required to service a truly hosted desktop model, where users can access their complete desktop from any internet-enabled device, can be vast even for a stateful model. Should the requirement be for stateless desktops, where a complete virtual desktop is assembled on demand, the difficulties intensify yet further. Then consider what happens if everyone wants this at the same time: boot storms are the bane of every desktop virtualisation vendor’s life. Server virtualisation is, admittedly, not quite as dramatic, but right-sizing and adequate performance of storage resources are still a must.

Storage hypervisors provide performance boosts in three main ways:

  1. Caching
  2. Automated tiering
  3. Path management.

Caching.
Within a true software-only storage hypervisor, the RAM on the server that hosts it is used as a cache for all the storage assets it virtualises. Advanced write-coalescing and read pre-fetching algorithms deliver significantly faster I/O response.
The cache can also be used to compensate for the widely different traffic levels and peak loads found in virtualised server environments. It smooths out traffic surges and balances workloads more intelligently so that applications and users can work more efficiently. In general, the performance of the underlying storage can easily be doubled using a storage hypervisor.
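
As a rough illustration of the write-coalescing and read pre-fetching ideas mentioned above, here is a toy Python sketch. The backend interface, cache sizes and flush threshold are assumptions made for the sake of the example, not a description of any real storage hypervisor's internals:

  from collections import OrderedDict

  class HostRamCache:
      """Toy block cache: pre-fetches sequential reads and coalesces writes in server RAM."""

      def __init__(self, backend, capacity_blocks=1024, prefetch=4, flush_at=64):
          self.backend = backend        # assumed to offer read_block() and write_blocks()
          self.cache = OrderedDict()    # block number -> data, kept in LRU order
          self.dirty = {}               # writes held back for coalescing
          self.capacity = capacity_blocks
          self.prefetch = prefetch
          self.flush_at = flush_at

      def read(self, block):
          if block in self.dirty:                    # serve un-flushed writes from RAM
              return self.dirty[block]
          if block not in self.cache:                # miss: pre-fetch a sequential run
              for b in range(block, block + self.prefetch):
                  self.cache[b] = self.backend.read_block(b)
          self.cache.move_to_end(block)
          self._evict()
          return self.cache[block]

      def write(self, block, data):
          self.dirty[block] = data                   # absorb the write in RAM
          self.cache[block] = data
          self._evict()
          if len(self.dirty) >= self.flush_at:       # coalesce into one ordered backend write
              self.backend.write_blocks(sorted(self.dirty.items()))
              self.dirty.clear()

      def _evict(self):
          while len(self.cache) > self.capacity:     # drop least-recently-used blocks
              self.cache.popitem(last=False)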

Tiering.
You can also improve performance by tiering the data. All your storage assets are managed as a common pool of storage and continually monitored as to their performance. The auto-tiering technology migrates the most frequently-used data—which generally needs higher performance—onto the fastest devices. Likewise, less frequently-used data typically gets demoted to higher capacity but lower-performance devices.
Auto-tiering uses tiering profiles that dictate both the initial allocation and subsequent migration dynamics. A user can go with the standard set of default profiles or can create custom profiles for specific application access patterns. However you do it, you get the performance and capacity utilisation benefits of tiering from all your devices, regardless of manufacturer.
As new generations of device appear, such as Solid State Disks (SSDs), flash memory and very large capacity disks, these faster or larger devices can simply be added to the available pool of storage and assigned a tier. In addition, as devices age they can be reset to a lower tier, often extending their useful life. Many users are interested in deploying SSD technology to gain performance, but the high cost and limited write endurance of SSDs make auto-tiering and caching architectures essential to maximising their efficiency. A storage hypervisor can absorb a good deal of the write traffic, thereby extending the useful life of SSDs, and with auto-tiering only the data that needs the benefits of the high-speed SSD tier is directed there. With the storage hypervisor in place, the system self-tunes and optimises the use of all the storage devices.
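
A heavily simplified sketch of the promote/demote logic might look like the following. The tier names, thresholds and extent structure are all assumptions chosen for illustration rather than taken from a real product:

  # Tiers ordered fastest to slowest, purely for illustration: SSD, SAS, SATA.
  TIERS = ["ssd", "sas", "sata"]

  def retier(extents, hot_threshold=100, cold_threshold=10):
      """Promote or demote each extent one tier based on its recent access count.

      `extents` is a list of dicts like {"id": "ext-1", "tier": 1, "hits": 250},
      where `hits` is the access count gathered over the monitoring window.
      """
      moves = []
      for ext in extents:
          tier = ext["tier"]
          if ext["hits"] >= hot_threshold and tier > 0:
              ext["tier"] = tier - 1        # promote hot data to a faster tier
              moves.append((ext["id"], TIERS[tier], TIERS[tier - 1]))
          elif ext["hits"] <= cold_threshold and tier < len(TIERS) - 1:
              ext["tier"] = tier + 1        # demote cold data to a cheaper, larger tier
              moves.append((ext["id"], TIERS[tier], TIERS[tier + 1]))
      return moves

  # One monitoring window: ext-1 is hot and moves up to SSD, ext-2 is cold and drops to SAS.
  extents = [{"id": "ext-1", "tier": 1, "hits": 250},
             {"id": "ext-2", "tier": 0, "hits": 3}]
  print(retier(extents))

In a real product the monitoring window, extent size and migration throttling matter at least as much as the decision rule itself, but the principle is the same.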

Path management.
Finally, a storage hypervisor can greatly reduce the complexity of path management. The software auto-discovers the connections between storage devices and the server(s) it’s running on. It then monitors queue depth to detect congestion and routes I/O in a balanced way across all possible paths to the storage in a given virtual pool. The wealth of reporting and monitoring data it gathers also allows administrators to maintain a consistent level of performance across disparate devices from different vendors.
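
As a final illustrative sketch, and again using assumed path names rather than a real multipathing API, routing each I/O down the path with the shallowest queue might look something like this:

  class Path:
      """One route from the server to a virtual pool, e.g. an HBA/controller pairing."""
      def __init__(self, name):
          self.name = name
          self.queue_depth = 0        # outstanding I/Os currently on this path

  def pick_path(paths):
      """Route the next I/O down the path with the shallowest queue."""
      return min(paths, key=lambda p: p.queue_depth)

  def submit_io(paths):
      path = pick_path(paths)
      path.queue_depth += 1           # issued; a completion handler would decrement this
      return path.name

  # Hypothetical path names; I/Os spread themselves across the least-busy routes.
  paths = [Path("hba0:ctrlA"), Path("hba0:ctrlB"), Path("hba1:ctrlA")]
  print([submit_io(paths) for _ in range(6)])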

To conclude, virtualisation is not a silver bullet, but, to be fair, it was rarely touted as such either. The negative side-effects can often be mitigated; it just depends on how you approach the main one, which is storage. Do you continue to go along with the hardware manufacturers’ 3-year life cycle approach, buying overpriced hardware that is deliberately not designed to work alongside anyone else’s kit, or do you take the view that disk is just disk after all, regardless of who made it? The real value is in the software that manages it, and that management can be provided by a storage hypervisor. With the cost of hard drives increasing rapidly due to the floods in Thailand in 2011, never has it been more important to give yourself choice.