One of the so-called Super Sessions at iForum was conducted by Ian Pratt, a professor at the University of Cambridge and one of the original founders and authors of the Xen hypervisor. His is a rather interesting story that started in the early noughties, when he decided that hypervisors should be small, stable and, most importantly, free. It wasn't until 2004, though, that XenSource was spun out to provide a fully supported, chargeable product based on Xen. XenSource, as I would hope you're aware, was then bought by Citrix last year for 500 million dollars (hope he had some share options!). I certainly didn't want to miss the opportunity to listen to a genuinely brilliant academic, and about 150 other people agreed with me.
Ian was talking about what a dynamic data centre would really comprise. You may well be aware of a product called Provisioning Server, which originated from Citrix's acquisition of Ardence a while back. Provisioning Server does exactly what it says: it provisions both virtual and physical machines from one image stored at the back-end. Once you have your gold build, you can replicate it out, very quickly and in a uniform fashion, to as many servers as you like. What often used to take hours can now be completed in just a few minutes.
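To make that concrete, here's my own back-of-an-envelope sketch in Python - nothing to do with Provisioning Server's actual internals or API, just the gold-build idea in code. The key is that servers reference one master image rather than each getting a copy:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GoldImage:
    name: str
    version: int = 1

    def patch(self) -> None:
        # Update the single master image; every target picks
        # up the change on its next boot.
        self.version += 1

@dataclass
class Server:
    hostname: str
    image: Optional[GoldImage] = None

def provision(servers: list, gold: GoldImage) -> None:
    # Point every server (physical or virtual) at the same
    # back-end image - nothing is copied to each machine,
    # which is why it takes minutes rather than hours.
    for server in servers:
        server.image = gold

estate = [Server(f"srv{n:03}") for n in range(1, 51)]
provision(estate, GoldImage("win2003-gold"))
```

Fifty servers, one image, one place to patch. That's the whole trick.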
This product, according to Pratt, transforms a deadweight lump of a data centre into an agile, dynamic, lively, moving thing, capable of automatically adapting capacity on the fly for ever-changing workloads. It doesn't do it on its own, however. There are one or two other things that contribute to a large degree and, I must admit, I hadn't thought of them until Ian mentioned them.
One is Workflow Studio, which is currently in beta. It enables you to pre-program and automate run-of-the-mill tasks such as powering servers up and down, adding new users and so on, by configuring a workflow in an easy-to-use GUI. It ties in with all of the Citrix products, obviously, but, interestingly, with 3rd-party products too. The possibilities are endless...
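Workflow Studio itself is all drag-and-drop in the GUI, but conceptually a workflow is just an ordered pipeline of tasks with hooks into Citrix and 3rd-party kit. A rough Python sketch of the concept (the task names and functions are entirely my own illustration, not anything from the product):

```python
from typing import Callable, List

Workflow = List[Callable[[], None]]

def power_on_server() -> None:
    print("powering on spare capacity")   # placeholder action

def add_new_users() -> None:
    print("creating user accounts")       # placeholder action

def notify_third_party() -> None:
    print("calling a 3rd-party hook")     # placeholder action

def run(workflow: Workflow) -> None:
    # Execute each step in order, much as a configured
    # workflow would once it's triggered.
    for step in workflow:
        step()

run([power_on_server, add_new_users, notify_third_party])
```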
Another is EdgeSight, which monitors the performance of the end user's applications and feeds that data back in a measurable format. This would potentially provide the raw data that determines how the data centre ultimately reacts.
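The important word there is measurable - something downstream has to be able to act on it. A toy illustration of what I mean, with invented metric names and thresholds (nothing from EdgeSight itself):

```python
from statistics import mean
from typing import Dict, List

# Hypothetical samples of application response times, in seconds,
# as an agent on the endpoint might report them back.
samples = {"srv001": [0.8, 1.1, 0.9], "srv002": [2.4, 3.1, 2.8]}

SLA_SECONDS = 2.0

def breaches(samples: Dict[str, List[float]]) -> List[str]:
    # Reduce raw observations to a simple, actionable signal:
    # which servers are averaging above the SLA threshold?
    return [host for host, times in samples.items()
            if mean(times) > SLA_SECONDS]

print(breaches(samples))  # ['srv002'] - raw data the data centre can react to
```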
The other features Ian mentioned as integral to a dynamic data centre are embedded in the XenServer product itself.
Live migration, i.e. taking a running virtual machine and moving it from one physical server to another, provides not only great resilience and disaster recovery options, but also contributes, in no small way, to the agility mentioned earlier. Should an SLA require it, for example, a user's session can be moved to a server that is currently less busy and can provide better performance. Alternatively, you could use it as a kind of manual load balancer.
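For the curious, XenServer exposes this programmatically through the XenAPI Python bindings that ship with the SDK. Something along these lines should do it - the host names and credentials are obviously made up, and I'm going from memory of the API, so treat it as a sketch rather than gospel:

```python
import XenAPI  # the Python binding from the XenServer SDK

# Connect to the pool master (URL and credentials are placeholders).
session = XenAPI.Session("https://pool-master.example.com")
session.xenapi.login_with_password("root", "password")
try:
    # Find a running VM and a quieter host. They're picked by
    # name here for brevity; a real script would look at metrics.
    vm = session.xenapi.VM.get_by_name_label("users-session-vm")[0]
    target = session.xenapi.host.get_by_name_label("quiet-host")[0]

    # XenMotion: move the running VM with no user-visible downtime.
    session.xenapi.VM.pool_migrate(vm, target, {"live": "true"})
finally:
    session.xenapi.session.logout()
```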
Resource pooling provides a way of allowing virtual machines to share resources and talk to one another. This enables them to make intelligent decisions on, for example, where the next VM should be created.
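As a rough illustration of the sort of decision a pool can make - this is my own toy placement logic, not XenServer's actual algorithm:

```python
from typing import Dict

def pick_host(free_memory_mb: Dict[str, int], vm_needs_mb: int) -> str:
    # Place the next VM on whichever pool member has the most
    # free memory, provided it can actually fit the VM.
    candidates = {h: m for h, m in free_memory_mb.items()
                  if m >= vm_needs_mb}
    if not candidates:
        raise RuntimeError("no host in the pool can take this VM")
    return max(candidates, key=candidates.get)

pool = {"host-a": 2048, "host-b": 8192, "host-c": 4096}
print(pick_host(pool, vm_needs_mb=1024))  # host-b
```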
3rd-party vendor inclusion - the creators of XenSource and XenServer have always had a very open-door policy towards co-operation with other software and hardware manufacturers - hardly surprising considering the open-source background. This extends to storage too. A lot of companies do storage, but EMC (the owners of VMware), Symantec and NetApp are probably the best known. These companies have an awful lot of clever machinery to do what they have to do, and XenServer uses APIs to leverage it. Such things as snapshotting and machine-cloning are not done in the XenServer software but provided through partners. Why re-invent the wheel? Why not let customers continue to use what they're already using? Makes sense to me.
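That API point is worth a sketch: the hypervisor layer defines the operation, the vendor's array does the clever bit. A hypothetical interface in Python, with invented names throughout:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    # The contract the hypervisor layer would define; each vendor
    # (EMC, NetApp and friends) supplies its own implementation.
    @abstractmethod
    def snapshot(self, volume: str) -> str: ...

class NetAppBackend(StorageBackend):
    def snapshot(self, volume: str) -> str:
        # In reality this would call the array's own clever
        # machinery; here it just pretends to.
        return f"{volume}@snap-001"

def clone_vm_disk(backend: StorageBackend, volume: str) -> str:
    # XenServer never re-implements snapshotting - it delegates
    # to whichever backend the customer already owns.
    return backend.snapshot(volume)

print(clone_vm_disk(NetAppBackend(), "vm-disk-42"))
```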
So, why would you want a dynamic data centre then? One partner of ours has already pointed out that, outside the enterprise - say, estates of 100 servers or more - they don't really see the point of it all.
Pratt maintains these are the main reasons:
1) Administrative policy enforcement - backups, firewalls, malware prevention etc. can all be enforced by providing gold builds and updating those gold builds regularly. If all of your servers had the same config because there was simply no way to provision a server other than from the gold build, wouldn't you sleep a little better at night?
2) Abstracting physical world complexity, e.g. multi-path storage and networking - sorry, I was too busy writing the last bit to get what Ian meant by this and, to be honest, it was a bit too sandals, beards and BO for me.
3) Simplifies application-stack certification - once you know a particular application works on a particular OS, you can set up your system so that the app is always provisioned onto that server OS. That can be taken one step further by establishing which OSs run well on which hypervisors, and one step further still by ensuring a particular hypervisor is compatible with the bare metal beneath it (see the sketch after this list).
4) Near-native performance - we are very close to a zero-footprint hypervisor these days. The Xen hypervisor at the heart of XenServer is only 50,000-odd lines of code as it is, but with recent developments you can hardly tell a server is running a hypervisor at all in terms of the performance of that server's nuts and bolts. Not necessarily a reason TO go down the route of creating a dynamic data centre but, at least, the eradication of a potential reason why NOT to.
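On point 3, here's the sketch I promised - a toy certification matrix in Python, with entirely made-up application, OS and hardware names. Each layer of the stack is certified against the one beneath it, and you only ever provision combinations that pass all the way down:

```python
from typing import Dict, Set

# app -> OS -> hypervisor -> hardware, certified layer by layer.
CERTIFIED_OS: Dict[str, Set[str]] = {"payroll-app": {"win2003"}}
CERTIFIED_HYPERVISOR: Dict[str, Set[str]] = {"win2003": {"xenserver"}}
CERTIFIED_HARDWARE: Dict[str, Set[str]] = {"xenserver": {"hp-dl385", "dell-2950"}}

def certified_stack(app: str, os: str, hv: str, hw: str) -> bool:
    # Allow provisioning only when every layer of the stack
    # has been certified against the layer beneath it.
    return (os in CERTIFIED_OS.get(app, set())
            and hv in CERTIFIED_HYPERVISOR.get(os, set())
            and hw in CERTIFIED_HARDWARE.get(hv, set()))

print(certified_stack("payroll-app", "win2003", "xenserver", "hp-dl385"))  # True
```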
3rd-generation virtualisation, as Ian Pratt called it, is all about moving from server consolidation to the concept of Monitor -> Decide -> Act. In Citrix product terms, that's obviously EdgeSight -> Workflow Studio -> Provisioning Server, all underpinned by XenServer.
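If you'll indulge me one last sketch, that loop looks something like this in Python, with stub functions standing in for EdgeSight, Workflow Studio and Provisioning Server respectively:

```python
import time
from typing import List

def monitor() -> List[str]:
    # EdgeSight's role: report which servers are breaching their SLA.
    return []  # stub - a real implementation would query the agents

def decide(breaching: List[str]) -> List[str]:
    # Workflow Studio's role: turn measurements into actions.
    return [f"provision extra capacity near {host}" for host in breaching]

def act(actions: List[str]) -> None:
    # Provisioning Server's role: carry the actions out.
    for action in actions:
        print(action)

while True:
    act(decide(monitor()))
    time.sleep(60)  # re-evaluate the estate every minute
```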