What's stalling virtualisation?
With only around 17-20% of servers worldwide having been virtualised and the hosted desktop model still spluttering into gear, the apparently indisputable benefits of virtualisation don't appear to be going mainstream. What is holding the technology back, and how can those obstacles be overcome?
Depending on which major market analyst firm you listen to, only around one fifth of servers worldwide have been virtualised with a hypervisor. Whilst other reports paint rosier pictures, widespread adoption, at least in live environments, has yet to arrive. Perhaps this is good news for the virtualisation vendors, or at least their shareholders. Viewed from a different perspective, however, if virtualisation is not being deployed as extensively as we presumed it would be, are the supposedly undeniable benefits preached by those vendors really as rock solid as they would have you believe?
The honest answer is probably yes. Those organisations which have deployed virtualisation in whatever guise are unlikely to want to go back to the way we used to do things.
Virtualisation has revolutionised organisations' IT infrastructures in many ways. It has allowed them to get more out of existing hardware and to reduce costs in a variety of areas. Perhaps most importantly, it has also allowed them to take a much more proactive approach to end-user IT delivery, and therefore to productivity.
Although virtualisation is not a new technology, VMware and others have successfully re-invented the wheel. VMware sees virtualisation as a three-stage process:
- Bringing production systems onto virtualised servers
- Tackling the priority one applications like Oracle and Exchange
- Delivering an entire IT infrastructure on demand or “as a service”.
Server virtualisation, however, has brought with it some unforeseen sticking points. The proliferation of virtual machines, spun up for one-off tasks or test and development activities and never touched again, can have a major impact: not only on the resources of the host hardware, but also on software asset management and security best practices.
So whilst the three-stage journey mentioned above is a realistic and achievable aim, there are other challenges that key stakeholders are becoming aware of. Whilst servers are certainly a lynchpin of a company's IT world, to ignore the impact of server virtualisation on other areas would be to cook your lemon cream cheese cake without its buttery biscuit base. Two of these challenges are often the decisive factors in the success or failure of a virtualisation project: the management and the performance of storage.
Storage and I/O performance are key to the success of any virtualisation project, whether it's server, application or desktop - particularly the last. Many applications that, before the advent of virtualisation, enjoyed the luxury of direct-attached storage are now having this comfort blanket ripped away and are being asked to do the same as they always have, if not more, using shared storage instead.
And many don't like it. The introduction of just a millisecond of extra latency is enough to send them into an almighty tailspin. Multi-tenancy also brings considerable challenges with regard to resilience. One big advantage of single-purpose servers was that a failure didn't necessarily have an impact on other resources; in a consolidated environment, where everything depends on shared storage, the resilience and continuity of the SAN infrastructure gain huge importance. Add into the mix the demands of cloud-based applications and virtual desktops, and we can begin to pick out the potential Achilles' heel in all of this.
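To put that millisecond in perspective, a quick back-of-the-envelope calculation shows why. The sketch below is purely illustrative (the service times are hypothetical, and it assumes a workload that issues one I/O at a time), but it shows how sharply a small amount of added latency caps the throughput such an application can achieve:

```python
# Illustrative only: the effect of added latency on a workload that
# issues one I/O at a time (queue depth 1), with hypothetical timings.

def max_iops(service_time_ms: float, added_latency_ms: float = 0.0) -> float:
    """Upper bound on I/Os per second when each request must
    complete before the next one is issued."""
    total_ms = service_time_ms + added_latency_ms
    return 1000.0 / total_ms

# Direct-attached storage completing a request in 0.5 ms:
print(round(max_iops(0.5)))        # 2000 IOPS
# The same workload with 1 ms of extra latency from shared storage:
print(round(max_iops(0.5, 1.0)))   # 667 IOPS
```

On these assumed numbers, a single extra millisecond cuts the achievable rate to roughly a third - which is why applications tuned around direct-attached response times react so badly to a shared, multi-tenant SAN.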
One answer to these demands is the concept of a storage hypervisor. A major enabling factor of any virtualisation product is the de-coupling of software from hardware. Ask VMware what vSphere runs on these days and they will probably give you a slightly quizzical look. Essentially, it doesn't really matter what the badge on the front of the server says: as long as the chipset is virtualisation-aware (and pretty much all are nowadays), nobody cares who made it. You can mix and match your server manufacturers to suit requirements in exactly the same way as Declan Kidney would pick his rugby team.
So should the same not apply to storage? Are the good old days of antiquated single-vendor storage silos numbered? Well, the trend towards hardware agnosticism is certainly becoming more and more apparent as priorities become performance-based. Just as Kidney would not put Cian Healy at fly-half or Tommy Bowe at number eight, so IT managers must be given the flexibility to choose what goes where, not be dictated to by their storage manufacturer.