Thursday 4 September 2008

Virtualisation - a bit of a snapshot

Note: I wrote this for our company's newsletter, so it's aimed at a very broad audience. It may be somewhat basic for the hardcore amongst you, but I'll post it anyway.

Currently one of the hottest topics in the IT marketplace, virtualisation has opened up new and exciting avenues of expertise (and, ultimately, revenue) for the entire IT channel. Many billions of dollars have been pocketed by the likes of VMware, Citrix, Microsoft and many others off the back of something that is a) incredibly simple, if we're being honest with ourselves, and b) incredibly old (in IT years, that is). Virtualisation, as a term, has come to mean many different things these days, but, for the purposes of this article, I am talking about server virtualisation.

Virtualisation is hardly revolutionary, even if it still seems fresh and shiny to a lot of us. You wouldn't manufacture a carriage for each and every person wanting to travel on a train, you wouldn't provide a drawer for each and every piece of cutlery you wanted to store and you wouldn't send an articulated lorry from London to Inverness with one box on it. So why does it appear to have taken so long for these principles to be applied to servers? Whose idea was it to dedicate a whole server to one single application anyway, when that server should supposedly be capable of "changing the way you do business" or whatever the latest vendor marketing catchphrase happens to be?

Well, IT vendors have been their own worst enemies, to a point. Software being as infuriatingly unreliable as it is useful, administrators found out many years ago, to their cost, that the more you ask a server to do, the more likely it is to fail at one or more of those tasks. Install just one application per server and you give it a fighting chance.

Unlike that piece of cutlery, whose one and only function in this world is to reduce the size of your steak and chips to the point where you can eat it with some semblance of manners, software has a thousand and one things to do on top of delivering to your users the information they want from an application. Added to that, unlike a software application, a fork doesn't have to deal with millions of people all over the world thinking up increasingly ingenious ways to kill it. So, in essence, the more you cut down on what a server is expected to provide, the higher the likelihood it'll provide it.

I am being overly simplistic here, of course. But my point is that, although virtualisation solutions were invented a long time ago, there were, until the last few years, two main reasons they never really took off.

Firstly, software applications and operating systems weren't technically capable of ignoring their neighbours on a server; each application was like a spoilt child: it wanted all the hardware resources to itself and, if anything else tried to butt in, it complained bitterly, then sulked, then went home and took its football with it.

Secondly, tin got cheaper. So cheap, in fact, that it didn't really make a lot of difference whether you had racks and racks of servers doing very little. Tin is still cheap today, perhaps as cheap as it will ever be, but two things have changed: organisations are coming under increased pressure to reduce waste and run their departments in a more responsible, ecological fashion, and, in the last couple of years, the costs of running that cheap tin have sky-rocketed.

The development of the modern hypervisor and improvements in general software architecture changed some of this. Although IBM developed virtualisation technology back in the 1960s, long before VMware brought out their ESX product, it is widely accepted that VMware were the founders of modern virtualisation techniques. The ESX hypervisor effectively stood as a sort of strict parent in the midst of those spoilt children, making sure none of them got too obnoxious. The hypervisor sat between the mechanics of a server and the software running on it, in this case an operating system, relaying commands back and forth between the two.
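
To make that a little more concrete, here is a deliberately toy sketch, in C, of the "trap and emulate" loop that sits at the heart of that style of hypervisor. Every name in it is invented for illustration and it bears no relation to real ESX code; the point is simply the shape of the loop: run the guest, catch the privileged operation it isn't allowed to perform, fake the result, resume.

    /*
     * A toy "trap and emulate" loop - every name here is invented
     * for illustration; real hypervisors are vastly more complex.
     * The guest runs until it attempts a privileged operation, the
     * CPU traps back to the hypervisor, the hypervisor performs the
     * operation against emulated hardware, then resumes the guest.
     */
    #include <stdio.h>
    #include <stdbool.h>

    typedef enum { EXIT_IO_WRITE, EXIT_HALT } exit_reason;

    typedef struct {
        exit_reason why;     /* why the guest trapped out          */
        int port, value;     /* details of the trapped I/O attempt */
    } vcpu;

    /* Stand-in for running the guest on the real CPU: here we just
     * replay one scripted privileged operation, then a halt. */
    static void run_guest(vcpu *v)
    {
        static int step = 0;
        if (step++ == 0) {
            v->why = EXIT_IO_WRITE;
            v->port = 0x3F8;         /* a serial port, say */
            v->value = 'A';
        } else {
            v->why = EXIT_HALT;
        }
    }

    /* The hypervisor's half: make the privileged operation appear
     * to have happened without the guest touching real hardware. */
    static bool emulate(const vcpu *v)
    {
        switch (v->why) {
        case EXIT_IO_WRITE:
            printf("emulated write to port 0x%X: '%c'\n",
                   v->port, v->value);
            return true;    /* carry on running the guest */
        case EXIT_HALT:
        default:
            return false;   /* guest has finished */
        }
    }

    int main(void)
    {
        vcpu v = { 0 };
        do {
            run_guest(&v);       /* guest runs until it traps */
        } while (emulate(&v));   /* hypervisor dishes out the result */
        return 0;
    }

The catch, of course, is that every privileged operation takes that expensive detour through the hypervisor - which is exactly the inefficiency the next idea set out to remove.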

Then along came Professor Ian Pratt of Cambridge University and his open source Xen project. He and his team decided to build a hypervisor the way it would have been built all along if anybody had actually thought about it properly. Rather than have the hypervisor relay all of these commands itself (emulation), why not get the component parts to talk to each other directly (paravirtualisation)? The chipset knows it's being virtualised and the operating systems are aware of it too. Everything runs quicker and is more stable. In fact, this approach was so successful that not only did Citrix agree to shell out 500 million dollars for the project's commercial arm in 2007, but Microsoft, late to the party, also went down this route with their Hyper-V product.
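
Again purely as an illustration (the names are made up; real Xen hypercalls go through a dedicated trap instruction and a numbered interface), the paravirtualised version of the same idea looks more like this: the guest knows the hypervisor is there and simply asks it to do the work.

    /*
     * The paravirtualised version of the same idea, equally toy:
     * the guest knows it is virtualised, so instead of being caught
     * misbehaving it asks the hypervisor directly via a "hypercall".
     * The names here are invented; real Xen hypercalls go through a
     * dedicated trap instruction and a numbered interface.
     */
    #include <stdio.h>

    enum { HCALL_CONSOLE_WRITE = 1 };   /* hypothetical operation number */

    /* Hypervisor side: one well-defined, public entry point. */
    static long hypercall(int op, long arg)
    {
        switch (op) {
        case HCALL_CONSOLE_WRITE:
            putchar((int)arg);          /* write to the host console */
            return 0;
        default:
            return -1;                  /* unknown operation */
        }
    }

    /* Guest side: no privileged instructions and nothing to trap -
     * it simply calls the interface it knows the hypervisor offers. */
    int main(void)
    {
        const char *msg = "hello from a paravirtualised guest\n";
        for (const char *p = msg; *p; p++)
            hypercall(HCALL_CONSOLE_WRITE, *p);
        return 0;
    }

No trapping, no guessing at what the guest was trying to do - just a direct, agreed-upon conversation between the two, which is why it runs quicker.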

Software vendors are also starting to play their part. Making applications act less like spoilt children was always their duty, but, increasingly, they are also adapting those applications to better suit virtualised environments. Add in the soaring cost of electricity and a general malaise in the global economy (I refuse to succumb to sensationalism and mention the R word), and IT Directors now owe it to themselves and their businesses to look at virtualisation simply as a cost-saving mechanism, irrespective of the IT management advantages it offers.

Companies such as Citrix (who, arguably, have been "virtualising" applications with their Presentation Server product, now XenApp, for well over a decade) are taking the foundation that is server virtualisation and building on it - to great effect. Citrix call it the Dynamic Data Centre. By this, they mean adapting what is currently little more than a static data repository into a constantly-changing, ever-flexible delivery hub for information and data.

Instead of server farms running all day and all night, always carrying the same workload (i.e. the same applications) and requiring irritating, albeit scheduled, downtime whenever maintenance needs carrying out, Citrix products enable those servers to be put into production as and when required - and, if you so wish, with a different workload on them each time. Automating all of this according to usage metrics, and applying the same principles to the desktop as well as the data centre, are the next exciting steps on a journey that started almost 50 years ago but is only now becoming a reality.
